qid & accept id: (44780, 45907) query: What's the best way to implement a SQL script that will grant permissions to a database role on all the user tables in a database? soup:
Dr Zimmerman is on the right track here. I'd be looking to write a stored procedure that has a cursor looping through user objects, using execute immediate to effect the grant. Something like this:
IF EXISTS (
    SELECT 1 FROM sysobjects
    WHERE name = 'sp_grantastic'
    AND type = 'P'
)
DROP PROCEDURE sp_grantastic
GO

CREATE PROCEDURE sp_grantastic
AS
DECLARE
 @object_name VARCHAR(30)
,@time        VARCHAR(8)
,@rights      VARCHAR(20)
,@role        VARCHAR(20)

DECLARE c_objects CURSOR FOR
    SELECT name
    FROM sysobjects
    WHERE type IN ('P', 'U', 'V')
    FOR READ ONLY

BEGIN
    SELECT @rights = 'ALL'
          ,@role   = 'PUBLIC'

    OPEN c_objects
    WHILE (1=1)
    BEGIN
        FETCH c_objects INTO @object_name
        IF @@SQLSTATUS <> 0 BREAK

        SELECT @time = CONVERT(VARCHAR, GetDate(), 108)
        PRINT '[%1!] hitting up object %2!', @time, @object_name
        EXECUTE('GRANT ' + @rights + ' ON ' + @object_name + ' TO ' + @role)
    END

    PRINT '[%1!] fin!', @time

    CLOSE c_objects
    DEALLOCATE CURSOR c_objects
END
GO

GRANT ALL ON sp_grantastic TO PUBLIC
GO
Then you can fire and forget:
EXEC sp_grantastic
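For a quick sanity check outside the server, the statement-building half of the loop can be sketched in Python; the object names, rights, and role below are hypothetical stand-ins for what the cursor would fetch from sysobjects:

```python
# Sketch: generate the GRANT statements that a procedure like sp_grantastic
# would execute. The object list is a made-up stand-in for a sysobjects query.
def build_grants(objects, rights="ALL", role="PUBLIC"):
    """Return one GRANT statement per user object."""
    return ["GRANT {0} ON {1} TO {2}".format(rights, obj, role) for obj in objects]

grants = build_grants(["orders", "customers", "vw_sales"])
```

Generating the statements first (rather than executing them blindly) also gives you a script you can review before running.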
qid & accept id:
(79789, 80134)
query:
Elegant method for drawing hourly bar chart from time-interval data?
soup:
Create a table with just time in it from midnight to midnight containing each minute of the day. In the data warehouse world we would call this a time dimension. Here's an example:
TIME_DIM
-id
-time_of_day
-interval_15
-interval_30
An example of the data in the table would be:
id time_of_day interval_15 interval_30
1 00:00 00:00 00:00
...
30 00:23 00:15 00:00
...
100 05:44 05:30 05:30
Then all you have to do is join your table to the time dimension and then group by interval_15. For example:
SELECT b.interval_15, count(*)
FROM my_data_table a
INNER JOIN time_dim b ON a.time_field = b.time_of_day
WHERE a.date_field = now()
GROUP BY b.interval_15
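As a sketch, the time-dimension join can be exercised in SQLite (standing in for a real warehouse); the table and column names follow the example above, and the data rows are invented:

```python
import sqlite3

# Minimal sketch of the time-dimension join, using SQLite in place of a real
# warehouse. Table/column names follow the answer; the data rows are made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE time_dim (id INTEGER PRIMARY KEY, time_of_day TEXT, interval_15 TEXT)")
rows = []
for minute in range(24 * 60):                 # one row per minute of the day
    h, m = divmod(minute, 60)
    bucket = m - m % 15                       # floor to the 15-minute boundary
    rows.append(("%02d:%02d" % (h, m), "%02d:%02d" % (h, bucket)))
conn.executemany("INSERT INTO time_dim (time_of_day, interval_15) VALUES (?, ?)", rows)

conn.execute("CREATE TABLE my_data_table (time_field TEXT)")
conn.executemany("INSERT INTO my_data_table VALUES (?)",
                 [("00:01",), ("00:14",), ("00:20",), ("05:44",)])

counts = conn.execute("""
    SELECT b.interval_15, COUNT(*)
    FROM my_data_table a
    JOIN time_dim b ON a.time_field = b.time_of_day
    GROUP BY b.interval_15
    ORDER BY b.interval_15
""").fetchall()
```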
qid & accept id:
(128623, 131595)
query:
Disable all table constraints in Oracle
soup:
It is better to avoid writing out temporary spool files. Use a PL/SQL block. You can run this from SQL*Plus or put this thing into a package or procedure. The join to USER_TABLES is there to avoid view constraints.
It's unlikely that you really want to disable all constraints (including NOT NULL, primary keys, etc). You should think about putting constraint_type in the WHERE clause.
BEGIN
  FOR c IN
  (SELECT c.owner, c.table_name, c.constraint_name
   FROM user_constraints c, user_tables t
   WHERE c.table_name = t.table_name
   AND c.status = 'ENABLED'
   AND NOT (t.iot_type IS NOT NULL AND c.constraint_type = 'P')
   ORDER BY c.constraint_type DESC)
  LOOP
    dbms_utility.exec_ddl_statement('alter table "' || c.owner || '"."' || c.table_name || '" disable constraint ' || c.constraint_name);
  END LOOP;
END;
/
Enabling the constraints again is a bit trickier - you need to enable primary key constraints before you can reference them in a foreign key constraint. This can be done using an ORDER BY on constraint_type. 'P' = primary key, 'R' = foreign key.
BEGIN
  FOR c IN
  (SELECT c.owner, c.table_name, c.constraint_name
   FROM user_constraints c, user_tables t
   WHERE c.table_name = t.table_name
   AND c.status = 'DISABLED'
   ORDER BY c.constraint_type)
  LOOP
    dbms_utility.exec_ddl_statement('alter table "' || c.owner || '"."' || c.table_name || '" enable constraint ' || c.constraint_name);
  END LOOP;
END;
/
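The enable/disable ordering trick rests on 'P' sorting before 'R'; a minimal Python sketch of that ordering, with made-up constraint names:

```python
# Sketch of the ordering logic: ascending constraint_type enables primary keys
# ('P') before the foreign keys ('R') that reference them; the DESC order used
# when disabling does the reverse. The constraint names here are invented.
constraints = [
    ("R", "FK_ORDER_CUSTOMER"),
    ("P", "PK_CUSTOMER"),
    ("C", "CK_AMOUNT_POSITIVE"),
    ("P", "PK_ORDER"),
]

enable_order = sorted(constraints)                 # 'C' < 'P' < 'R'
disable_order = sorted(constraints, reverse=True)  # foreign keys first
```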
qid & accept id:
(182130, 182255)
query:
SQL - state machine - reporting on historical data based on changeset
soup:
This can be done, but would be a lot more efficient if you stored the end date of each log. With your model you have to do something like:
select l1.userid
from status_log l1
where l1.status='s'
and l1.logcreated = (select max(l2.logcreated)
                     from status_log l2
                     where l2.userid = l1.userid
                     and l2.logcreated <= date '2008-02-15'
                    );
With the additional column it would be more like:
select userid
from status_log
where status='s'
and logcreated <= date '2008-02-15'
and logsuperseded >= date '2008-02-15';
(Apologies for any syntax errors; I don't know PostgreSQL.)
To address some further issues raised by Phil:
A user might get moved from active, to suspended, to cancelled, to active again. This is a simplified version, in reality, there are even more states and people can be moved directly from one state to another.
This would appear in the table like this:
userid from to status
FRED 2008-01-01 2008-01-31 s
FRED 2008-02-01 2008-02-07 c
FRED 2008-02-08 a
I used a null for the "to" date of the current record. I could have used a future date like 2999-12-31 but null is preferable in some ways.
Additionally, there would be no "end date" for the current status either, so I think this slightly breaks your query?
Yes, my query would have to be re-written as
select userid
from status_log
where status='s'
and logcreated <= date '2008-02-15'
and (logsuperseded is null or logsuperseded >= date '2008-02-15');
A downside of this design is that whenever the user's status changes you have to end date their current status_log as well as create a new one. However, that isn't difficult, and I think the query advantage probably outweighs this.
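A minimal sketch of the end-dated design, run in SQLite with the FRED data above (ISO dates so plain text comparison works):

```python
import sqlite3

# Sketch of the end-dated status_log design in SQLite; the data mirrors the
# FRED example, with NULL logsuperseded marking the current, open-ended record.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE status_log
                (userid TEXT, status TEXT, logcreated TEXT, logsuperseded TEXT)""")
conn.executemany("INSERT INTO status_log VALUES (?, ?, ?, ?)", [
    ("FRED", "s", "2008-01-01", "2008-01-31"),
    ("FRED", "c", "2008-02-01", "2008-02-07"),
    ("FRED", "a", "2008-02-08", None),        # current record: no end date yet
])

def status_on(conn, day):
    """Who held which status on the given day?"""
    return conn.execute("""
        SELECT userid, status FROM status_log
        WHERE logcreated <= ?
          AND (logsuperseded IS NULL OR logsuperseded >= ?)
    """, (day, day)).fetchall()

jan = status_on(conn, "2008-01-15")   # suspended period
now = status_on(conn, "2008-02-15")   # current, open-ended record
```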
qid & accept id: (192220, 192462) query: What is the most efficient/elegant way to parse a flat table into a tree? soup:There are several ways to store tree-structured data in a relational database. What you show in your example uses two methods:
Another solution is called Nested Sets, and it can be stored in the same table too. Read "Trees and Hierarchies in SQL for Smarties" by Joe Celko for a lot more information on these designs.
I usually prefer a design called Closure Table (aka "Adjacency Relation") for storing tree-structured data. It requires another table, but then querying trees is pretty easy.
I cover Closure Table in my presentation Models for Hierarchical Data with SQL and PHP and in my book SQL Antipatterns: Avoiding the Pitfalls of Database Programming.
CREATE TABLE ClosureTable (
ancestor_id INT NOT NULL REFERENCES FlatTable(id),
descendant_id INT NOT NULL REFERENCES FlatTable(id),
PRIMARY KEY (ancestor_id, descendant_id)
);
Store all paths in the Closure Table, where there is a direct ancestry from one node to another. Include a row for each node to reference itself. For example, using the data set you showed in your question:
INSERT INTO ClosureTable (ancestor_id, descendant_id) VALUES
(1,1), (1,2), (1,4), (1,6),
(2,2), (2,4),
(3,3), (3,5),
(4,4),
(5,5),
(6,6);
Now you can get a tree starting at node 1 like this:
SELECT f.*
FROM FlatTable f
JOIN ClosureTable a ON (f.id = a.descendant_id)
WHERE a.ancestor_id = 1;
The output (in MySQL client) looks like the following:
+----+
| id |
+----+
| 1 |
| 2 |
| 4 |
| 6 |
+----+
In other words, nodes 3 and 5 are excluded, because they're part of a separate hierarchy, not descending from node 1.
Re: comment from e-satis about immediate children (or immediate parent). You can add a "path_length" column to the ClosureTable to make it easier to query specifically for an immediate child or parent (or any other distance).
INSERT INTO ClosureTable (ancestor_id, descendant_id, path_length) VALUES
(1,1,0), (1,2,1), (1,4,2), (1,6,1),
(2,2,0), (2,4,1),
(3,3,0), (3,5,1),
(4,4,0),
(5,5,0),
(6,6,0);
Then you can add a term in your search for querying the immediate children of a given node. These are descendants whose path_length is 1.
SELECT f.*
FROM FlatTable f
JOIN ClosureTable a ON (f.id = a.descendant_id)
WHERE a.ancestor_id = 1
AND path_length = 1;
+----+
| id |
+----+
| 2 |
| 6 |
+----+
Re comment from @ashraf: "How about sorting the whole tree [by name]?"
Here's an example query to return all nodes that are descendants of node 1, join them to the FlatTable that contains other node attributes such as name, and sort by the name.
SELECT f.name
FROM FlatTable f
JOIN ClosureTable a ON (f.id = a.descendant_id)
WHERE a.ancestor_id = 1
ORDER BY f.name;
Re comment from @Nate:
SELECT f.name, GROUP_CONCAT(b.ancestor_id order by b.path_length desc) AS breadcrumbs
FROM FlatTable f
JOIN ClosureTable a ON (f.id = a.descendant_id)
JOIN ClosureTable b ON (b.descendant_id = a.descendant_id)
WHERE a.ancestor_id = 1
GROUP BY a.descendant_id
ORDER BY f.name
+------------+-------------+
| name | breadcrumbs |
+------------+-------------+
| Node 1 | 1 |
| Node 1.1 | 1,2 |
| Node 1.1.1 | 1,2,4 |
| Node 1.2 | 1,6 |
+------------+-------------+
A user suggested an edit today. SO moderators approved the edit, but I am reversing it.
The edit suggested that the ORDER BY in the last query above should be ORDER BY b.path_length, f.name, presumably to make sure the ordering matches the hierarchy. But this doesn't work, because it would order "Node 1.1.1" after "Node 1.2".
If you want the ordering to match the hierarchy in a sensible way, that is possible, but not simply by ordering by the path length. For example, see my answer to MySQL Closure Table hierarchical database - How to pull information out in the correct order.
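The descendant and immediate-children queries above can be run as-is in SQLite against the sample data (FlatTable omitted here, since only the ids are needed):

```python
import sqlite3

# The closure-table queries above, run in SQLite against the answer's data set
# (the path_length variant). FlatTable is omitted; only the ids matter here.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE ClosureTable
                (ancestor_id INT, descendant_id INT, path_length INT,
                 PRIMARY KEY (ancestor_id, descendant_id))""")
conn.executemany("INSERT INTO ClosureTable VALUES (?, ?, ?)", [
    (1, 1, 0), (1, 2, 1), (1, 4, 2), (1, 6, 1),
    (2, 2, 0), (2, 4, 1),
    (3, 3, 0), (3, 5, 1),
    (4, 4, 0), (5, 5, 0), (6, 6, 0),
])

# Whole subtree under node 1 (nodes 3 and 5 are excluded).
subtree = [r[0] for r in conn.execute(
    "SELECT descendant_id FROM ClosureTable WHERE ancestor_id = 1 ORDER BY descendant_id")]

# Immediate children only: descendants at path_length 1.
children = [r[0] for r in conn.execute(
    """SELECT descendant_id FROM ClosureTable
       WHERE ancestor_id = 1 AND path_length = 1 ORDER BY descendant_id""")]
```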
qid & accept id: (216007, 216020) query: How to determine total number of open/active connections in ms sql server 2005 soup:This shows the number of connections for each DB:
SELECT
DB_NAME(dbid) as DBName,
COUNT(dbid) as NumberOfConnections,
loginame as LoginName
FROM
sys.sysprocesses
WHERE
dbid > 0
GROUP BY
dbid, loginame
And this gives the total:
SELECT
COUNT(dbid) as TotalConnections
FROM
sys.sysprocesses
WHERE
dbid > 0
If you need more detail, run:
sp_who2 'Active'
Note: The SQL Server account used needs the 'sysadmin' role (otherwise it will just show a single row and a count of 1 as the result)
qid & accept id: (289649, 289849) query: Remapping/Concatenating in SQL soup:Assuming that the column headings "john", "lucy" etc are fixed, you can group by the address field and use if() functions combined with aggregate operators to get your results:
select max(if(forename='john',surname,null)) as john,
max(if(forename='lucy',surname,null)) as lucy,
max(if(forename='jenny',surname,null)) as jenny,
max(if(forename='steve',surname,null)) as steve,
max(if(forename='richard',surname,null)) as richard,
address
from tablename
group by address;
It is a bit brittle though.
There is also the group_concat function that can be used (within limits) to do something similar, but it will be ordered row-wise rather than column-wise as you appear to require.
e.g.
select address, group_concat( concat( forename, surname ) ) tenants
from tablename
group by address;
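A sketch of the same pivot in SQLite, with MySQL's IF() rewritten as a standard CASE expression; the names and addresses are invented:

```python
import sqlite3

# The same pivot in SQLite, with MySQL's IF() written as a standard CASE
# expression (MAX ignores NULLs, so each name lands in its own column).
# The data is invented to match the column headings.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tablename (forename TEXT, surname TEXT, address TEXT)")
conn.executemany("INSERT INTO tablename VALUES (?, ?, ?)", [
    ("john", "smith", "1 High St"),
    ("lucy", "jones", "1 High St"),
    ("steve", "brown", "2 Low Rd"),
])

rows = conn.execute("""
    SELECT MAX(CASE WHEN forename = 'john' THEN surname END) AS john,
           MAX(CASE WHEN forename = 'lucy' THEN surname END) AS lucy,
           MAX(CASE WHEN forename = 'steve' THEN surname END) AS steve,
           address
    FROM tablename
    GROUP BY address
    ORDER BY address
""").fetchall()
```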
qid & accept id:
(313962, 313995)
query:
PHP/MySQL: Retrieving the last *full* weeks entries
soup:
See the MySQL function YEARWEEK().
So you could do something like
SELECT * FROM table WHERE YEARWEEK(purchased) = YEARWEEK(NOW());
You can change the starting day of the week by using a second mode parameter.
What might be better, however, is to calculate the date of 'last Sunday at 00:00', so the database would not have to run a function for each row, but I couldn't see an obvious way of doing that in MySQL. You could, however, easily generate this in PHP and do something like:
$sunday = date('Y-m-d H:i:s', strtotime('last sunday 00:00'));
$sql = "SELECT * FROM table WHERE purchased >= '$sunday'";
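The 'last Sunday at 00:00' cutoff is equally easy to compute in Python; this sketch uses a fixed "now" for determinism and a hypothetical purchases table name. (Note that PHP's strtotime('last sunday') jumps a full week back when run on a Sunday, whereas this version treats a Sunday as its own week start.)

```python
from datetime import datetime, timedelta

# Sketch of the PHP idea in Python: find the most recent Sunday at 00:00 and
# build the cutoff string for the WHERE clause. Table name is hypothetical.
def last_sunday(now):
    days_back = (now.weekday() + 1) % 7          # Monday=0 ... Sunday=6
    sunday = (now - timedelta(days=days_back)).replace(
        hour=0, minute=0, second=0, microsecond=0)
    return sunday.strftime("%Y-%m-%d %H:%M:%S")

cutoff = last_sunday(datetime(2009, 1, 7, 15, 30))   # 2009-01-07 was a Wednesday
sql = "SELECT * FROM purchases WHERE purchased >= '%s'" % cutoff
```

In real code, pass the cutoff as a bound query parameter rather than interpolating it into the SQL string.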
qid & accept id:
(318528, 321624)
query:
How do you identify the triggers associated with a table in a sybase database?
soup:
I also found out that
sp_depends
will show you a lot of information about a table, including all triggers associated with it. Using it along with Ray's query makes it much easier to find the triggers. Combined with this query from Ray's linked article:
sp_helptext
you can see the definition of the trigger. And
sp_depends
will also show you all tables related to a trigger.
qid & accept id: (363084, 363089) query: MYSQL - How would I Export tables specifying only certain fields? soup:
SELECT A,B,C
FROM X
INTO OUTFILE 'file name';
You need the FILE privilege to do this, and it won't overwrite files.
INTO OUTFILE has a bunch of options to it as well, such as FIELDS ENCLOSED BY, FIELDS ESCAPED BY, etc... that you may want to look up in the manual.
To produce a CSV file, you would do something like:
SELECT A,B,C
INTO OUTFILE '/tmp/result.txt'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM X;
To load the data back in from the file, use the LOAD DATA INFILE command with the same options you used to dump it out. For the CSV format above, that would be
LOAD DATA INFILE '/tmp/result.txt'
INTO TABLE X
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';
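INTO OUTFILE and LOAD DATA INFILE run server-side in MySQL; as a client-side sketch of the same round trip (selected columns out to CSV and back in), here is the equivalent with Python's csv module, using an in-memory buffer in place of /tmp/result.txt:

```python
import csv
import io

# Client-side sketch of the dump/reload round trip: selected columns out to
# CSV, then back in. QUOTE_MINIMAL mirrors OPTIONALLY ENCLOSED BY '"': fields
# are quoted only when they need it (e.g. embedded quotes or commas).
rows = [("a1", "b1", "c1"), ("a2", 'say "hi"', "c2")]

buf = io.StringIO()                                  # stands in for the file
writer = csv.writer(buf, quoting=csv.QUOTE_MINIMAL)
writer.writerows(rows)

buf.seek(0)
loaded = [tuple(r) for r in csv.reader(buf)]         # the LOAD DATA half
```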
qid & accept id:
(374079, 374191)
query:
Group repeated rows in TSQL
soup:
This is a set-based solution for the problem. The performance will probably suck, but it works :)
CREATE TABLE #LogEntries (
ID INT IDENTITY,
LogEntry VARCHAR(100)
)
INSERT INTO #LogEntries VALUES ('beans')
INSERT INTO #LogEntries VALUES ('beans')
INSERT INTO #LogEntries VALUES ('beans')
INSERT INTO #LogEntries VALUES ('cabbage')
INSERT INTO #LogEntries VALUES ('cabbage')
INSERT INTO #LogEntries VALUES ('carrots')
INSERT INTO #LogEntries VALUES ('beans')
INSERT INTO #LogEntries VALUES ('beans')
INSERT INTO #LogEntries VALUES ('carrots')
SELECT logentry, COUNT(*) FROM (
    SELECT logentry,
           ISNULL((SELECT MAX(id) FROM #logentries l2 WHERE l1.logentry <> l2.logentry AND l2.id < l1.id), 0) AS id
    FROM #LogEntries l1
) AS a
GROUP BY logentry, id
DROP TABLE #logentries
Results:
beans 3
cabbage 2
carrots 1
beans 2
carrots 1
The ISNULL() is required for the first set of beans.
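The same set-based grouping runs in SQLite with only small substitutions (ISNULL becomes COALESCE, IDENTITY becomes INTEGER PRIMARY KEY, and an ORDER BY is added so the runs come back in log order):

```python
import sqlite3

# The answer's set-based run grouping in SQLite. Each row is tagged with the
# MAX(id) of the nearest earlier row with a DIFFERENT entry; rows in the same
# consecutive run share that tag, so grouping on it counts the runs.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE LogEntries (id INTEGER PRIMARY KEY, LogEntry TEXT)")
conn.executemany("INSERT INTO LogEntries (LogEntry) VALUES (?)",
                 [(x,) for x in ["beans", "beans", "beans", "cabbage", "cabbage",
                                 "carrots", "beans", "beans", "carrots"]])

runs = conn.execute("""
    SELECT LogEntry, COUNT(*) FROM (
        SELECT LogEntry,
               COALESCE((SELECT MAX(id) FROM LogEntries l2
                         WHERE l1.LogEntry <> l2.LogEntry AND l2.id < l1.id), 0) AS grp
        FROM LogEntries l1
    ) AS a
    GROUP BY LogEntry, grp
    ORDER BY grp
""").fetchall()
```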
qid & accept id: (379556, 380440) query: Time slicing in Oracle/SQL soup:In terms of getting the data out, you can use GROUP BY and TRUNC to slice the data into 1-minute intervals, e.g.:
SELECT user_name, TRUNC(event_time, 'MI'), COUNT(*)
FROM job_table
WHERE event_time > TO_DATE( some start date time)
AND user_name IN ( list of users to query )
GROUP BY user_name, TRUNC(event_time, 'MI')
This will give you results like below (assuming there are 20 rows for Alice between 8:00 and 8:01 and 40 rows between 8:01 and 8:02):
Alice 2008-12-16 08:00 20
Alice 2008-12-16 08:01 40
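As a sketch, the same minute-level slicing can be done in SQLite by chopping an ISO timestamp at the minute; the job_table rows are invented:

```python
import sqlite3

# Oracle's TRUNC(event_time, 'MI') sketched in SQLite by taking the first 16
# characters of an ISO timestamp ("YYYY-MM-DD HH:MM"). Data rows are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE job_table (user_name TEXT, event_time TEXT)")
conn.executemany("INSERT INTO job_table VALUES (?, ?)", [
    ("Alice", "2008-12-16 08:00:05"),
    ("Alice", "2008-12-16 08:00:47"),
    ("Alice", "2008-12-16 08:01:30"),
])

slices = conn.execute("""
    SELECT user_name, substr(event_time, 1, 16) AS minute, COUNT(*)
    FROM job_table
    GROUP BY user_name, minute
    ORDER BY minute
""").fetchall()
```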
qid & accept id:
(439138, 439387)
query:
Running total by grouped records in table
soup:
Do you really need the extra table?
You can get that data you need with a simple query, which you can obviously create as a view if you want it to appear like a table.
This will get you the data you are looking for:
select
account, bookdate, amount,
sum(amount) over (partition by account order by bookdate) running_total
from t
/
This will create a view to show you the data as if it were a table:
create or replace view t2
as
select
account, bookdate, amount,
sum(amount) over (partition by account order by bookdate) running_total
from t
/
If you really need the table, do you mean that you need it constantly updated, or is it just a one-off? Obviously, if it's a one-off, you can just "create table as select" using the above query.
Test data I used is:
create table t(account number, bookdate date, amount number);
insert into t(account, bookdate, amount) values (1, to_date('20080101', 'yyyymmdd'), 100);
insert into t(account, bookdate, amount) values (1, to_date('20080102', 'yyyymmdd'), 101);
insert into t(account, bookdate, amount) values (1, to_date('20080103', 'yyyymmdd'), -200);
insert into t(account, bookdate, amount) values (2, to_date('20080102', 'yyyymmdd'), 200);
commit;
edit:
Forgot to add: you specified that you wanted the table to be ordered. This doesn't really make sense, and makes me think that you really mean that you want the query/view; ordering is a result of the query you execute, not something that's inherent in the table (ignoring Index Organised Tables and the like).
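The analytic running total runs unchanged in SQLite (window functions require SQLite 3.25 or later) against the test data above:

```python
import sqlite3

# The analytic running total, run in SQLite (window functions need SQLite
# 3.25+) against the answer's test data, with dates stored as ISO text.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (account INT, bookdate TEXT, amount INT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    (1, "2008-01-01", 100),
    (1, "2008-01-02", 101),
    (1, "2008-01-03", -200),
    (2, "2008-01-02", 200),
])

totals = conn.execute("""
    SELECT account, bookdate, amount,
           SUM(amount) OVER (PARTITION BY account ORDER BY bookdate) AS running_total
    FROM t
    ORDER BY account, bookdate
""").fetchall()
```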
qid & accept id: (501021, 501037) query: Python + SQLite query to find entries that sit in a specified time slot soup:SQLite3 doesn't have a datetime type, though it does have date and time functions.
Typically you store dates and times in your database in something like ISO 8601 format: YYYY-MM-DD HH:MM:SS. Then datetimes sort lexicographically into time order.
With your datetimes stored this way, you simply use text comparisons such as
SELECT * FROM tbl WHERE tbl.start = '2009-02-01 10:30:00'
or
SELECT * FROM tbl WHERE '2009-02-01 10:30:00' BETWEEN tbl.start AND tbl.end;
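Since the answer is about SQLite itself, the technique runs as written; the table and column names are illustrative, and "end" is quoted because it is a keyword:

```python
import sqlite3

# ISO 8601 text timestamps compare correctly as plain strings, so BETWEEN
# works directly. Table/column names are illustrative; "end" must be quoted
# because it is an SQL keyword.
conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE tbl (start TEXT, "end" TEXT)')
conn.executemany("INSERT INTO tbl VALUES (?, ?)", [
    ("2009-02-01 10:00:00", "2009-02-01 11:00:00"),
    ("2009-02-01 12:00:00", "2009-02-01 13:00:00"),
])

hits = conn.execute(
    'SELECT start FROM tbl WHERE ? BETWEEN start AND "end"',
    ("2009-02-01 10:30:00",)).fetchall()
```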
qid & accept id:
(521270, 558434)
query:
Best way to implement a stored procedure with full text search
soup:
I agree with the above; look into AND clauses:
\nSELECT TITLE\nFROM MOVIES\nWHERE CONTAINS(TITLE,'"hollywood*" AND "square*"')\n\nHowever you shouldn't have to split the input sentences, you can use variable
\nSELECT TITLE\nFROM MOVIES\nWHERE CONTAINS(TITLE,@parameter)\n\nby the way\nsearch for the exact term (contains)\nsearch for any term in the phrase (freetext)
\n soup wrap:I agreed with above, look into AND clauses
SELECT TITLE
FROM MOVIES
WHERE CONTAINS(TITLE,'"hollywood*" AND "square*"')
However, you shouldn't have to split the input sentence; you can use a variable:
SELECT TITLE
FROM MOVIES
WHERE CONTAINS(TITLE,@parameter)
By the way: use CONTAINS to search for the exact term, and FREETEXT to search for any term in the phrase.
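SQL Server's CONTAINS can't be exercised outside SQL Server, but as a rough analogue, SQLite's FTS5 MATCH accepts a similar AND/prefix expression passed as a single bound parameter. A sketch under those assumptions (the table and data are hypothetical, and it assumes your SQLite build includes FTS5):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# FTS5 virtual table standing in for a full-text-indexed MOVIES table.
conn.execute("CREATE VIRTUAL TABLE movies USING fts5(title)")
conn.executemany(
    "INSERT INTO movies (title) VALUES (?)",
    [("Hollywood Squares",), ("Hollywood Boulevard",), ("Town Square",)],
)

# The whole search expression is bound as one parameter, mirroring
# CONTAINS(TITLE, @parameter): no need to split the input string.
rows = conn.execute(
    "SELECT title FROM movies WHERE title MATCH ?",
    ("hollywood AND square*",),
).fetchall()
print(rows)  # [('Hollywood Squares',)]
```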
qid & accept id: (539942, 539951) query: Updating multiple rows with a value calculated from another column soup wrap:SELECT SUBSTRING(colDate,0,8) as 'date'
FROM someTable
Or am I mistaken?
UPDATE someTable
SET newDateField = SUBSTRING(colDate,0,8)
Would likely work too. Untested.
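One caveat worth checking: SUBSTRING positions are 1-based in SQL Server, so a start of 0 yields only the first seven characters of an eight-character window. A quick sketch of the same UPDATE using SQLite's substr, with a hypothetical value and start position 1:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE someTable (colDate TEXT, newDateField TEXT)")
conn.execute("INSERT INTO someTable (colDate) VALUES ('20080101 120000')")

# substr positions are 1-based: substr(colDate, 1, 8) takes the first
# eight characters; a start of 0 would return only seven.
conn.execute("UPDATE someTable SET newDateField = substr(colDate, 1, 8)")
value = conn.execute("SELECT newDateField FROM someTable").fetchone()[0]
print(value)  # 20080101
```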
qid & accept id: (556509, 556550) query: SQL : update statement with dynamic column value assignment
soup wrap:UPDATE mytable, (
SELECT @loop := MAX(col1)
FROM
mytable
) o
SET col1 = (@loop := @loop + 1)
What you encountered here is called query stability.
No query can see the changes made by itself; otherwise, the following query:
UPDATE mytable
SET col1 = col2 + 1
WHERE col1 > col2
would never end.
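This statement-level snapshot behavior is not MySQL-specific: each row is evaluated against its values from before the statement started, so the UPDATE above terminates. A sketch on SQLite with hypothetical data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (col1 INTEGER, col2 INTEGER)")
conn.executemany("INSERT INTO mytable VALUES (?, ?)", [(5, 1), (0, 3)])

# The WHERE clause sees pre-statement values, so each row is updated
# at most once and the statement terminates.
conn.execute("UPDATE mytable SET col1 = col2 + 1 WHERE col1 > col2")
rows = conn.execute("SELECT col1, col2 FROM mytable ORDER BY col2").fetchall()
print(rows)  # [(2, 1), (0, 3)]
```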
qid & accept id: (560694, 560760) query: adding a tags field to an asp.net web page
soup wrap:
Here is an oversimplified example. I am using C#, but converting it to VB should be trivial. You will need to dig into lots more details.
Assuming that you are using WebForms, you need a textbox on your page (the code below assumes its ID is txtTags) and a submit button whose click event is wired to the SaveTags method:
protected void SaveTags(object sender, EventArgs e)
{
string[] tags = txtTags.Text.Split(' ');
SqlConnection connection = new SqlConnection("Your connection string");
connection.Open();
SqlCommand command = new SqlCommand("Insert Into Tags(tag) Values(@tag)", connection);
foreach (string tag in tags)
{
command.Parameters.Clear();
command.Parameters.AddWithValue("@tag", tag);
command.ExecuteNonQuery();
}
connection.Close();
}
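The same pattern — split the input on spaces, then run one parameterized INSERT per tag — can be sketched with Python's sqlite3 for illustration. The Tags table matches the answer; everything else here is hypothetical:

```python
import sqlite3

def save_tags(conn, text):
    # Split the space-separated input and insert one row per tag,
    # binding each value like the @tag parameter in the C# version.
    tags = text.split(" ")
    with conn:
        conn.executemany(
            "INSERT INTO Tags (tag) VALUES (?)", [(t,) for t in tags]
        )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Tags (tag TEXT)")
save_tags(conn, "sql asp.net tagging")
saved = conn.execute("SELECT tag FROM Tags ORDER BY tag").fetchall()
print(saved)  # [('asp.net',), ('sql',), ('tagging',)]
```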
qid & accept id:
(576071, 576147)
query:
coverage percentage using a complex sql query...?
soup:
SELECT AVG(covered)
FROM (
SELECT CASE WHEN COUNT(*) >= 2 THEN 1 ELSE 0 END AS covered
FROM app a
LEFT JOIN skill s ON (s.id_app = a.id AND s.lvl >= 2)
GROUP BY a.id
) AS t
More efficient way for MySQL:
SELECT AVG
(
IFNULL
(
(
SELECT 1
FROM skill s
WHERE s.id_app = a.id
AND s.lvl >= 2
LIMIT 1, 1
), 0
)
)
FROM app a
This will stop counting as soon as it finds the second skilled person for each app.
This is efficient if you have only a few apps but lots of people.
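The first query is easy to verify. Note that MySQL requires the derived table to have an alias; with one added, the query also runs as-is on SQLite. Hypothetical data: app 1 is covered by two level-2 people, app 2 by only one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE app (id INTEGER PRIMARY KEY);
CREATE TABLE skill (id_app INTEGER, lvl INTEGER);
INSERT INTO app (id) VALUES (1), (2);
-- app 1 has two people at lvl >= 2 (covered); app 2 has only one.
INSERT INTO skill (id_app, lvl) VALUES (1, 2), (1, 3), (2, 2);
""")

coverage = conn.execute("""
    SELECT AVG(covered)
    FROM (
        SELECT CASE WHEN COUNT(*) >= 2 THEN 1 ELSE 0 END AS covered
        FROM app a
        LEFT JOIN skill s ON (s.id_app = a.id AND s.lvl >= 2)
        GROUP BY a.id
    ) AS t
""").fetchone()[0]
print(coverage)  # 0.5
```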
soup wrap:Don't forget to use HEXTORAW(varchar2) when comparing this value to the RAW columns.
There is no implicit conversion from VARCHAR2 to RAW. That means that this clause:
WHERE raw_column = :varchar_value
will be implicitly converted into:
WHERE RAWTOHEX(raw_column) = :varchar_value
thus making indexes on raw_column unusable.
Use:
WHERE raw_column = HEXTORAW(:varchar_value)
instead.
qid & accept id: (674776, 674801) query: Unified records for database query with Sql
soup wrap:
You will need to join your sub requester attribute table into the query twice: once with the attribute of Urgent and once with the attribute of Closed.
You will need to LEFT join to these for the instances where they may be null, and then reference each of the tables in your SELECT to show the relevant attribute.
I also wouldn't recommend the cross join. You should perform your "OR" join on the personnel table in the FROM clause rather than doing a cross join and filtering in the WHERE clause.
EDIT: Sorry, my first response was a bit rushed. I have now had a chance to look further. Because the sub requester and the sub requester attribute are both duplicated, you need to split them both up into subqueries. Also, your modified date could be different for the two values, so I've doubled that up. This is completely untested, and by no means the "optimum" solution. It's quite tricky to write the query without the actual database to check against. Hopefully it will explain what I meant, though.
SELECT
r.RequesterID,
p.FirstName + ' ' + p.LastName AS RequesterName,
sra1.ModifiedDate as UrgentModifiedDate,
sra1.AttributeValue as Urgent,
sra2.ModifiedDate as ClosedModifiedDate,
sra2.AttributeValue as Closed
FROM
Personnel AS p
INNER JOIN
Requester AS r
ON
(
r.UserID = p.ContractorID
OR
r.UserID = p.EmployeeID
)
LEFT OUTER JOIN
(
SELECT
sr1.RequesterID,
sr1.ModifiedDate,
sa1.Attribute,
sa1.AttributeValue
FROM
SubRequester AS sr1
INNER JOIN
SubRequesterAttribute AS sa1
ON
sr1.SubRequesterID = sa1.SubRequesterID
AND
sa1.Attribute = 'Urgent'
) sra1
ON
sra1.RequesterID = r.RequesterID
LEFT OUTER JOIN
(
SELECT
sr2.RequesterID,
sr2.ModifiedDate,
sa2.Attribute,
sa2.AttributeValue
FROM
SubRequester AS sr2
INNER JOIN
SubRequesterAttribute AS sa2
ON
sr2.SubRequesterID = sa2.SubRequesterID
AND
sa2.Attribute = 'Closed'
) sra2
ON
sra2.RequesterID = r.RequesterID
SECOND EDIT: My last edit assumed that there were multiple SubRequesters as well as multiple Attributes; from your last comment it sounds like you want to show all SubRequesters and the two relevant attributes. You can achieve this as follows.
SELECT
r.RequesterID,
p.FirstName + ' ' + p.LastName AS RequesterName,
sr.ModifiedDate,
sa1.AttributeValue as Urgent,
sa2.AttributeValue as Closed
FROM
Personnel AS p
INNER JOIN
Requester AS r
ON
(
r.UserID = p.ContractorID
OR
r.UserID = p.EmployeeID
)
INNER JOIN
SubRequester as sr
ON
sr.RequesterID = r.RequesterID
LEFT OUTER JOIN
SubRequesterAttribute AS sa1
ON
sa1.SubRequesterID = sr.SubRequesterID
AND
sa1.Attribute = 'Urgent'
LEFT OUTER JOIN
SubRequesterAttribute AS sa2
ON
sa2.SubRequesterID = sr.SubRequesterID
AND
sa2.Attribute = 'Closed'
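The key trick in both queries is putting the attribute filter in the ON clause of each LEFT JOIN, so a missing attribute yields NULL instead of dropping the row. A stripped-down sketch on SQLite (hypothetical data, simplified columns):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE SubRequester (SubRequesterID INTEGER, RequesterID INTEGER, ModifiedDate TEXT);
CREATE TABLE SubRequesterAttribute (SubRequesterID INTEGER, Attribute TEXT, AttributeValue TEXT);
INSERT INTO SubRequester VALUES (10, 1, '2009-01-01');
-- Only an 'Urgent' attribute exists; the 'Closed' join comes back NULL.
INSERT INTO SubRequesterAttribute VALUES (10, 'Urgent', 'Yes');
""")

row = conn.execute("""
    SELECT sr.RequesterID, sa1.AttributeValue AS Urgent, sa2.AttributeValue AS Closed
    FROM SubRequester sr
    LEFT JOIN SubRequesterAttribute sa1
      ON sa1.SubRequesterID = sr.SubRequesterID AND sa1.Attribute = 'Urgent'
    LEFT JOIN SubRequesterAttribute sa2
      ON sa2.SubRequesterID = sr.SubRequesterID AND sa2.Attribute = 'Closed'
""").fetchone()
print(row)  # (1, 'Yes', None)
```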
qid & accept id:
(684106, 684158)
query:
Find the smallest unused number in SQL Server
soup:
Find the first row where there does not exist a row with Id + 1
SELECT TOP 1 t1.Id+1
FROM table t1
WHERE NOT EXISTS(SELECT * FROM table t2 WHERE t2.Id = t1.Id + 1)
ORDER BY t1.Id
Edit:
To handle the special case where the lowest existing id is not 1, here is an ugly solution:
SELECT TOP 1 * FROM (
SELECT t1.Id+1 AS Id
FROM table t1
WHERE NOT EXISTS(SELECT * FROM table t2 WHERE t2.Id = t1.Id + 1 )
UNION
SELECT 1 AS Id
WHERE NOT EXISTS (SELECT * FROM table t3 WHERE t3.Id = 1)) ot
ORDER BY 1
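The first query is easy to verify. SQLite has no TOP, so the sketch below spells TOP 1 ... ORDER BY as ORDER BY ... LIMIT 1; the table and ids are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO t (Id) VALUES (?)", [(1,), (2,), (4,), (5,)])

# First Id whose successor is missing: Id 2, so the answer is 3.
smallest_unused = conn.execute("""
    SELECT t1.Id + 1
    FROM t t1
    WHERE NOT EXISTS (SELECT * FROM t t2 WHERE t2.Id = t1.Id + 1)
    ORDER BY t1.Id
    LIMIT 1
""").fetchone()[0]
print(smallest_unused)  # 3
```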
qid & accept id:
(706664, 16797460)
query:
Generate SQL Create Scripts for existing tables with Query
soup:
Possibly this will be helpful for you. This script generates the indexes, FKs, PK, and general structure for any table.
For example -
DDL:
CREATE TABLE [dbo].[WorkOut](
[WorkOutID] [bigint] IDENTITY(1,1) NOT NULL,
[TimeSheetDate] [datetime] NOT NULL,
[DateOut] [datetime] NOT NULL,
[EmployeeID] [int] NOT NULL,
[IsMainWorkPlace] [bit] NOT NULL,
[DepartmentUID] [uniqueidentifier] NOT NULL,
[WorkPlaceUID] [uniqueidentifier] NULL,
[TeamUID] [uniqueidentifier] NULL,
[WorkShiftCD] [nvarchar](10) NULL,
[WorkHours] [real] NULL,
[AbsenceCode] [varchar](25) NULL,
[PaymentType] [char](2) NULL,
[CategoryID] [int] NULL,
[Year] AS (datepart(year,[TimeSheetDate])),
CONSTRAINT [PK_WorkOut] PRIMARY KEY CLUSTERED
(
[WorkOutID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
ALTER TABLE [dbo].[WorkOut] ADD
CONSTRAINT [DF__WorkOut__IsMainW__2C1E8537] DEFAULT ((1)) FOR [IsMainWorkPlace]
ALTER TABLE [dbo].[WorkOut] WITH CHECK ADD CONSTRAINT [FK_WorkOut_Employee_EmployeeID] FOREIGN KEY([EmployeeID])
REFERENCES [dbo].[Employee] ([EmployeeID])
ALTER TABLE [dbo].[WorkOut] CHECK CONSTRAINT [FK_WorkOut_Employee_EmployeeID]
Query:
DECLARE @table_name SYSNAME
SELECT @table_name = 'dbo.WorkOut'
DECLARE
@object_name SYSNAME
, @object_id INT
SELECT
@object_name = '[' + s.name + '].[' + o.name + ']'
, @object_id = o.[object_id]
FROM sys.objects o WITH (NOWAIT)
JOIN sys.schemas s WITH (NOWAIT) ON o.[schema_id] = s.[schema_id]
WHERE s.name + '.' + o.name = @table_name
AND o.[type] = 'U'
AND o.is_ms_shipped = 0
DECLARE @SQL NVARCHAR(MAX) = ''
;WITH index_column AS
(
SELECT
ic.[object_id]
, ic.index_id
, ic.is_descending_key
, ic.is_included_column
, c.name
FROM sys.index_columns ic WITH (NOWAIT)
JOIN sys.columns c WITH (NOWAIT) ON ic.[object_id] = c.[object_id] AND ic.column_id = c.column_id
WHERE ic.[object_id] = @object_id
),
fk_columns AS
(
SELECT
k.constraint_object_id
, cname = c.name
, rcname = rc.name
FROM sys.foreign_key_columns k WITH (NOWAIT)
JOIN sys.columns rc WITH (NOWAIT) ON rc.[object_id] = k.referenced_object_id AND rc.column_id = k.referenced_column_id
JOIN sys.columns c WITH (NOWAIT) ON c.[object_id] = k.parent_object_id AND c.column_id = k.parent_column_id
WHERE k.parent_object_id = @object_id
)
SELECT @SQL = 'CREATE TABLE ' + @object_name + CHAR(13) + '(' + CHAR(13) + STUFF((
SELECT CHAR(9) + ', [' + c.name + '] ' +
CASE WHEN c.is_computed = 1
THEN 'AS ' + cc.[definition]
ELSE UPPER(tp.name) +
CASE WHEN tp.name IN ('varchar', 'char', 'varbinary', 'binary', 'text')
THEN '(' + CASE WHEN c.max_length = -1 THEN 'MAX' ELSE CAST(c.max_length AS VARCHAR(5)) END + ')'
WHEN tp.name IN ('nvarchar', 'nchar', 'ntext')
THEN '(' + CASE WHEN c.max_length = -1 THEN 'MAX' ELSE CAST(c.max_length / 2 AS VARCHAR(5)) END + ')'
WHEN tp.name IN ('datetime2', 'time2', 'datetimeoffset')
THEN '(' + CAST(c.scale AS VARCHAR(5)) + ')'
WHEN tp.name = 'decimal'
THEN '(' + CAST(c.[precision] AS VARCHAR(5)) + ',' + CAST(c.scale AS VARCHAR(5)) + ')'
ELSE ''
END +
CASE WHEN c.collation_name IS NOT NULL THEN ' COLLATE ' + c.collation_name ELSE '' END +
CASE WHEN c.is_nullable = 1 THEN ' NULL' ELSE ' NOT NULL' END +
CASE WHEN dc.[definition] IS NOT NULL THEN ' DEFAULT' + dc.[definition] ELSE '' END +
CASE WHEN ic.is_identity = 1 THEN ' IDENTITY(' + CAST(ISNULL(ic.seed_value, '0') AS CHAR(1)) + ',' + CAST(ISNULL(ic.increment_value, '1') AS CHAR(1)) + ')' ELSE '' END
END + CHAR(13)
FROM sys.columns c WITH (NOWAIT)
JOIN sys.types tp WITH (NOWAIT) ON c.user_type_id = tp.user_type_id
LEFT JOIN sys.computed_columns cc WITH (NOWAIT) ON c.[object_id] = cc.[object_id] AND c.column_id = cc.column_id
LEFT JOIN sys.default_constraints dc WITH (NOWAIT) ON c.default_object_id != 0 AND c.[object_id] = dc.parent_object_id AND c.column_id = dc.parent_column_id
LEFT JOIN sys.identity_columns ic WITH (NOWAIT) ON c.is_identity = 1 AND c.[object_id] = ic.[object_id] AND c.column_id = ic.column_id
WHERE c.[object_id] = @object_id
ORDER BY c.column_id
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 2, CHAR(9) + ' ')
+ ISNULL((SELECT CHAR(9) + ', CONSTRAINT [' + k.name + '] PRIMARY KEY (' +
(SELECT STUFF((
SELECT ', [' + c.name + '] ' + CASE WHEN ic.is_descending_key = 1 THEN 'DESC' ELSE 'ASC' END
FROM sys.index_columns ic WITH (NOWAIT)
JOIN sys.columns c WITH (NOWAIT) ON c.[object_id] = ic.[object_id] AND c.column_id = ic.column_id
WHERE ic.is_included_column = 0
AND ic.[object_id] = k.parent_object_id
AND ic.index_id = k.unique_index_id
FOR XML PATH(N''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 2, ''))
+ ')' + CHAR(13)
FROM sys.key_constraints k WITH (NOWAIT)
WHERE k.parent_object_id = @object_id
AND k.[type] = 'PK'), '') + ')' + CHAR(13)
+ ISNULL((SELECT (
SELECT CHAR(13) +
'ALTER TABLE ' + @object_name + ' WITH'
+ CASE WHEN fk.is_not_trusted = 1
THEN ' NOCHECK'
ELSE ' CHECK'
END +
' ADD CONSTRAINT [' + fk.name + '] FOREIGN KEY('
+ STUFF((
SELECT ', [' + k.cname + ']'
FROM fk_columns k
WHERE k.constraint_object_id = fk.[object_id]
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 2, '')
+ ')' +
' REFERENCES [' + SCHEMA_NAME(ro.[schema_id]) + '].[' + ro.name + '] ('
+ STUFF((
SELECT ', [' + k.rcname + ']'
FROM fk_columns k
WHERE k.constraint_object_id = fk.[object_id]
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 2, '')
+ ')'
+ CASE
WHEN fk.delete_referential_action = 1 THEN ' ON DELETE CASCADE'
WHEN fk.delete_referential_action = 2 THEN ' ON DELETE SET NULL'
WHEN fk.delete_referential_action = 3 THEN ' ON DELETE SET DEFAULT'
ELSE ''
END
+ CASE
WHEN fk.update_referential_action = 1 THEN ' ON UPDATE CASCADE'
WHEN fk.update_referential_action = 2 THEN ' ON UPDATE SET NULL'
WHEN fk.update_referential_action = 3 THEN ' ON UPDATE SET DEFAULT'
ELSE ''
END
+ CHAR(13) + 'ALTER TABLE ' + @object_name + ' CHECK CONSTRAINT [' + fk.name + ']' + CHAR(13)
FROM sys.foreign_keys fk WITH (NOWAIT)
JOIN sys.objects ro WITH (NOWAIT) ON ro.[object_id] = fk.referenced_object_id
WHERE fk.parent_object_id = @object_id
FOR XML PATH(N''), TYPE).value('.', 'NVARCHAR(MAX)')), '')
+ ISNULL(((SELECT
CHAR(13) + 'CREATE' + CASE WHEN i.is_unique = 1 THEN ' UNIQUE' ELSE '' END
+ ' NONCLUSTERED INDEX [' + i.name + '] ON ' + @object_name + ' (' +
STUFF((
SELECT ', [' + c.name + ']' + CASE WHEN c.is_descending_key = 1 THEN ' DESC' ELSE ' ASC' END
FROM index_column c
WHERE c.is_included_column = 0
AND c.index_id = i.index_id
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 2, '') + ')'
+ ISNULL(CHAR(13) + 'INCLUDE (' +
STUFF((
SELECT ', [' + c.name + ']'
FROM index_column c
WHERE c.is_included_column = 1
AND c.index_id = i.index_id
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 2, '') + ')', '') + CHAR(13)
FROM sys.indexes i WITH (NOWAIT)
WHERE i.[object_id] = @object_id
AND i.is_primary_key = 0
AND i.[type] = 2
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)')
), '')
PRINT @SQL
--EXEC sys.sp_executesql @SQL
Output:
CREATE TABLE [dbo].[WorkOut]
(
[WorkOutID] BIGINT NOT NULL IDENTITY(1,1)
, [TimeSheetDate] DATETIME NOT NULL
, [DateOut] DATETIME NOT NULL
, [EmployeeID] INT NOT NULL
, [IsMainWorkPlace] BIT NOT NULL DEFAULT((1))
, [DepartmentUID] UNIQUEIDENTIFIER NOT NULL
, [WorkPlaceUID] UNIQUEIDENTIFIER NULL
, [TeamUID] UNIQUEIDENTIFIER NULL
, [WorkShiftCD] NVARCHAR(10) COLLATE Cyrillic_General_CI_AS NULL
, [WorkHours] REAL NULL
, [AbsenceCode] VARCHAR(25) COLLATE Cyrillic_General_CI_AS NULL
, [PaymentType] CHAR(2) COLLATE Cyrillic_General_CI_AS NULL
, [CategoryID] INT NULL
, [Year] AS (datepart(year,[TimeSheetDate]))
, CONSTRAINT [PK_WorkOut] PRIMARY KEY ([WorkOutID] ASC)
)
ALTER TABLE [dbo].[WorkOut] WITH CHECK ADD CONSTRAINT [FK_WorkOut_Employee_EmployeeID] FOREIGN KEY([EmployeeID]) REFERENCES [dbo].[Employee] ([EmployeeID])
ALTER TABLE [dbo].[WorkOut] CHECK CONSTRAINT [FK_WorkOut_Employee_EmployeeID]
CREATE NONCLUSTERED INDEX [IX_WorkOut_WorkShiftCD_AbsenceCode] ON [dbo].[WorkOut] ([WorkShiftCD] ASC, [AbsenceCode] ASC)
INCLUDE ([WorkOutID], [WorkHours])
Also check this article -
How to Generate a CREATE TABLE Script For an Existing Table: Part 1
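For contrast, some engines make this trivial: SQLite, for example, stores the verbatim CREATE statement in its sqlite_master catalog, so regenerating the script is a single lookup rather than a walk over the system views. A small illustrative sketch with a made-up table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
ddl = ("CREATE TABLE WorkOut (WorkOutID INTEGER PRIMARY KEY, "
       "TimeSheetDate TEXT NOT NULL)")
conn.execute(ddl)

# sqlite_master keeps the original CREATE statement verbatim.
stored = conn.execute(
    "SELECT sql FROM sqlite_master WHERE type = 'table' AND name = 'WorkOut'"
).fetchone()[0]
print(stored == ddl)  # True
```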
qid & accept id: (713960, 714247) query: How to drop IDENTITY property of column in SQL Server 2005 soup wrap:If you are just processing rows as you describe, wouldn't it be better to just select the top N primary key values into a temp table like:
CREATE TABLE #KeysToProcess
(
TempID int not null primary key identity(1,1)
,YourKey1 int not null
,YourKey2 int not null
)
INSERT INTO #KeysToProcess (YourKey1,YourKey2)
SELECT TOP n YourKey1,YourKey2 FROM MyTable
The keys should not change very often (I hope), but other columns can, with no harm to doing it this way.
Get the @@ROWCOUNT of the insert and you can do an easy loop on TempID, where it will run from 1 to @@ROWCOUNT,
and/or
just join #KeysToProcess to your MyTable and be on your way, with no need to duplicate all the data.
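The same temp-table pattern can be sketched on SQLite via Python (names follow the answer loosely; LIMIT stands in for TOP N, and the insert's rowcount plays the role of @@ROWCOUNT):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (YourKey1 INTEGER, YourKey2 INTEGER, payload TEXT)")
conn.executemany(
    "INSERT INTO MyTable VALUES (?, ?, ?)",
    [(i, i * 10, "x") for i in range(1, 6)],
)

# Temp table keyed by a fresh 1..N identity, holding only the keys.
conn.execute("""
    CREATE TEMP TABLE KeysToProcess (
        TempID INTEGER PRIMARY KEY AUTOINCREMENT,
        YourKey1 INTEGER NOT NULL,
        YourKey2 INTEGER NOT NULL
    )
""")
cur = conn.execute("""
    INSERT INTO KeysToProcess (YourKey1, YourKey2)
    SELECT YourKey1, YourKey2 FROM MyTable ORDER BY YourKey1 LIMIT 3
""")
print(cur.rowcount)  # 3 -- the loop bound, like @@ROWCOUNT
```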
This runs fine on my SQL Server 2005, where MyTable.MyKey is an identity column.
-- Create empty temp table
SELECT *
INTO #TmpMike
FROM (SELECT
m1.*
FROM MyTable m1
LEFT OUTER JOIN MyTable m2 ON m1.MyKey=m2.MyKey
WHERE 1=0
) dt
INSERT INTO #TmpMike
SELECT TOP 1 * FROM MyTable
SELECT * from #TmpMike
EDIT
THIS WORKS, with no errors...
-- Create empty temp table
SELECT *
INTO #Tmp_MyTable
FROM (SELECT
m1.*
FROM MyTable m1
LEFT OUTER JOIN MyTable m2 ON m1.KeyValue=m2.KeyValue
WHERE 1=0
) dt
...
WHILE ...
BEGIN
...
INSERT INTO #Tmp_MyTable
SELECT TOP (@n) *
FROM MyTable
...
END
However, what is your real problem? Why do you need to loop while inserting "*" into this temp table? You may be able to shift strategy and come up with a much better algorithm overall.
qid & accept id: (726582, 840879) query: Updates on PIVOTs in SQL Server 2008 soup:This will only really work if the pivoted columns form a unique identifier. So let's take Buggy's example; here is the original table:
soup wrap: This will only really work if the pivoted columns form a unique identifier. So let's take Buggy's example; here is the original table:
TaskID Date Hours
and we want to pivot it into a table that looks like this:
TaskID 11/15/1980 11/16/1980 11/17/1980 ... etc.
In order to create the pivot, you would do something like this:
DECLARE @FieldList NVARCHAR(MAX)
SET @FieldList = ''  -- initialize; otherwise the CASE below always sees NULL and keeps only the last date
SELECT
@FieldList =
CASE WHEN @FieldList <> '' THEN
@FieldList + ', [' + [Date] + ']'
ELSE
'[' + [Date] + ']'
END
FROM
Tasks
DECLARE @PivotSQL NVARCHAR(MAX)
SET @PivotSQL =
'
SELECT
TaskID
, ' + @FieldList + '
INTO
##Pivoted
FROM
(
SELECT * FROM Tasks
) AS T
PIVOT
(
MAX(Hours) FOR [Date] IN (' + @FieldList + ')
) AS PVT
'
EXEC(@PivotSQL)
So then you have your pivoted table in ##Pivoted. Now you perform an update to one of the hours fields:
UPDATE
##Pivoted
SET
[11/16/1980 00:00:00] = 10
WHERE
TaskID = 1234
Now ##Pivoted has an updated version of the hours for a task that took place on 11/16/1980 and we want to save that back to the original table, so we use an UNPIVOT:
DECLARE @UnPivotSQL NVarChar(MAX)
SET @UnPivotSQL =
'
SELECT
TaskID
, [Date]
, [Hours]
INTO
##UnPivoted
FROM
##Pivoted
UNPIVOT
(
[Hours] FOR [Date] IN (' + @FieldList + ')
) AS UP
'
EXEC(@UnPivotSQL)
UPDATE
Tasks
SET
[Hours] = UP.[Hours]
FROM
Tasks T
INNER JOIN
##UnPivoted UP
ON
T.TaskID = UP.TaskID AND T.[Date] = UP.[Date]
You'll notice that I modified Buggy's example to remove aggregation by day-of-week. That's because there's no going back and updating if you perform any sort of aggregation. If I update the SUNHours field, how do I know which Sunday's hours I'm updating? This will only work if there is no aggregation. I hope this helps!
qid & accept id: (778909, 778922) query: Most efficient method for adding leading 0's to an int in SQL
soup wrap:
That is pretty much the way: Adding Leading Zeros To Integer Values
So, to save following the link, the query looks like this, where #Numbers is the table and Num is the column:
SELECT RIGHT('000000000' + CONVERT(VARCHAR(8),Num), 8) FROM #Numbers
For negative or positive values:
declare @v varchar(6)
select @v = -5
SELECT case when @v < 0
then '-' else '' end + RIGHT('00000' + replace(@v,'-',''), 5)
qid & accept id:
(802027, 802046)
query:
In SQL, how do you get the top N rows ordered by a certain column?
soup wrap: Definition: LIMIT is used to restrict your MySQL query results to those that fall within a specified range. You can use it to show the first X results, or to show a range from X to Y. It is phrased as LIMIT X, Y and included at the end of your query. X is the starting point (remember the first record is 0) and Y is how many records to display. Also known as: range results. Examples:
SELECT * FROM `your_table` LIMIT 0, 10
This will display the first 10 results from the database.
SELECT * FROM `your_table` LIMIT 5, 5
This will show records 6, 7, 8, 9, and 10
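Note that LIMIT by itself imposes no ordering; to get the top N rows by a particular column, as the question asks, combine it with ORDER BY (column name hypothetical):

```sql
-- Top 10 rows by some_column, highest first (MySQL syntax;
-- SQL Server would use SELECT TOP 10 ... ORDER BY instead)
SELECT *
FROM `your_table`
ORDER BY some_column DESC
LIMIT 10;
```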
qid & accept id: (852225, 852247) query: persons where the children are grouped for their parent
I recommend splitting this into two queries.
First, get a list of parents:
SELECT *
FROM Persons
WHERE id IN (SELECT parent FROM Persons)
ORDER BY age, id
Then get a properly sorted list of children:
SELECT Child.*
FROM Persons AS Child
JOIN Persons AS Parent ON (Parent.id = Child.parent)
ORDER BY Parent.age, Parent.id, Child.age, Child.id
The two lists can then easily be merged on the id/parent since they are both sorted first by parent's age.
soup wrap:
select list_id, address_id, count(*) as count
from LIST_MEMBERSHIPS
group by 1, 2
order by 3 desc
You may find it useful to add
having count > 1
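Some engines don't allow column aliases or ordinal positions in GROUP BY, HAVING, or ORDER BY; an equivalent, more portable form of the whole query would be:

```sql
-- Same duplicate-finder, spelled out without positional references
SELECT list_id, address_id, COUNT(*) AS dup_count
FROM LIST_MEMBERSHIPS
GROUP BY list_id, address_id
HAVING COUNT(*) > 1
ORDER BY COUNT(*) DESC;
```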
qid & accept id:
(938232, 938272)
query:
SQL Pivot on subset
soup wrap:
Here's an attempt at PIVOT:
select *
from YourTable
PIVOT (sum(amount) FOR Method in ([Cash],[Check])) as Y
Given that it's just two columns, you could try it with a join:
select
type
, cash = a.amount
, [check] = b.amount
from yourtable a
full join yourtable b on a.type = b.type
where a.method = 'cash' or b.method = 'Check'
qid & accept id:
(951401, 951768)
query:
SQL 2005 Split Comma Separated Column on Delimiter
soup wrap: Yes, it's possible with CROSS APPLY (SQL 2005+):
with testdata (CommaColumn, ValueColumn1, ValueColumn2) as (
select 'ABC,123', 1, 2 union all
select 'XYZ, 789', 2, 3
)
select
b.items as SplitValue
, a.ValueColumn1
, a.ValueColumn2
from testdata a
cross apply dbo.Split(a.CommaColumn,',') b
Notes:
You should add an index number to the result set of your split function, so that it returns two columns, IndexNumber and Value.
In-line implementations with a numbers table are generally faster than your procedural version here.
eg:
create function [dbo].[Split] (@list nvarchar(max), @delimiter nchar(1) = N',')
returns table
as
return (
select
Number = row_number() over (order by Number)
, [Value] = ltrim(rtrim(convert(nvarchar(4000),
substring(@list, Number
, charindex(@delimiter, @list+@delimiter, Number)-Number
)
)))
from dbo.Numbers
where Number <= convert(int, len(@list))
and substring(@delimiter + @list, Number, 1) = @delimiter
)
Erland Sommarskog has the definitive page on this, I think: http://www.sommarskog.se/arrays-in-sql-2005.html
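The function above assumes a dbo.Numbers tally table holding the integers 1..N; a one-off setup sketch (size TOP to your longest expected string):

```sql
-- Populate a 1..8000 tally table (SQL Server 2005+);
-- the cross join of system views just supplies enough rows
CREATE TABLE dbo.Numbers (Number int NOT NULL PRIMARY KEY);

INSERT INTO dbo.Numbers (Number)
SELECT TOP (8000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_objects a
CROSS JOIN sys.all_objects b;
```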
qid & accept id: (955927, 955972) query: What SQL would I need to use to list all the stored procedures on an Oracle database?
The DBA_OBJECTS view will list the procedures (as well as almost any other object):
SELECT owner, object_name
FROM dba_objects
WHERE object_type = 'PROCEDURE'
The DBA_SOURCE view will list the lines of source code for a procedure in question:
SELECT line, text
FROM dba_source
WHERE owner = ?
AND name = ?
AND type = 'PROCEDURE'
ORDER BY line
Note: Depending on your privileges, you may not be able to query the DBA_OBJECTS and DBA_SOURCE views. In this case, you can use ALL_OBJECTS and ALL_SOURCE instead. The DBA_ views contain all objects in the database, whereas the ALL_ views contain only those objects that you may access.
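For example, the same listing through the ALL_ views, which need no DBA privileges:

```sql
-- Procedures you are allowed to see; swap ALL_OBJECTS for USER_OBJECTS
-- (and drop the OWNER column) to list only procedures you own
SELECT owner, object_name
FROM all_objects
WHERE object_type = 'PROCEDURE'
ORDER BY owner, object_name;
```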
soup wrap: This can be easily achieved with a simple SQL statement using MySQL's REPLACE() function. Before you do that, you should definitely take a database dump, or whatever you use for backups. It's not only The Right Thing To Do™; if you make a mistake in your substitution, it might prove difficult to undo (yes, you could roll back, but you might only discover the mistake later on.)
To create a database dump from MySQL, you can run something like this --
mysqldump -h hostname -u username -p databasename > my_sql_dump.sql
Where (and you probably know this, but for the sake of completeness for future generations...) hostname is the MySQL server, username is your MySQL user, and databasename is the database you're backing up.
Now that we got that out of the way, you can log in to the MySQL database using:
mysql -h hostname -u username -p databasename
And simply run this statement:
UPDATE `wp-posts` SET `post-content` = REPLACE(`post-content`, "http://oldurl.com", "http://newurl.com");
And that should do it!
If you make a mistake, you can often rerun the statement with the original and new texts inverted (if the new text -- in your case the new URL -- didn't already exist in the text before you did the replace.) Sometimes this is not possible depending on what the new text was (again, not likely in your case.) Anyway, you can always try recovering the sql dump --
cat my_sql_dump.sql | mysql -h hostname -u username -p databasename
And voilà.
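One extra safety step worth taking before running the UPDATE above: count the rows that actually contain the old URL, so you can compare it against the affected-row count the UPDATE reports (table and column names as in the answer):

```sql
-- Preview: how many posts will the REPLACE touch?
SELECT COUNT(*) AS rows_to_change
FROM `wp-posts`
WHERE `post-content` LIKE '%http://oldurl.com%';
```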
qid & accept id: (1019661, 1019944) query: Finding Start and End Dates from Date Numbers Table (Date Durations)
soup wrap:
Assuming the Day IDs are always sequential, here is a partial solution...
select *
from employee_schedule a
where not exists( select *
from employee_schedule b
where a.employeeid = b.employeeid
and a.projectid = b.projectid
and (a.dayid - 1) = b.dayid )
lists the start day IDs:
ID EMPLOYEEID PROJECTID DAYID
1 64 2 168
5 64 1 169
9 64 2 182
select *
from employee_schedule a
where not exists( select *
from employee_schedule b
where a.employeeid = b.employeeid
and a.projectid = b.projectid
and (a.dayid + 1) = b.dayid )
lists the end day IDs:
ID EMPLOYEEID PROJECTID DAYID
4 64 2 171
8 64 1 172
11 64 2 184
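Combining the two queries pairs each start with its matching end, giving one row per continuous booking; a sketch under the same sequential-Day-ID assumption:

```sql
-- For each start day, find the smallest end day at or after it
SELECT s.employeeid,
       s.projectid,
       s.dayid AS start_dayid,
       (SELECT MIN(e.dayid)
          FROM employee_schedule e
         WHERE e.employeeid = s.employeeid
           AND e.projectid  = s.projectid
           AND e.dayid     >= s.dayid
           AND NOT EXISTS (SELECT *
                             FROM employee_schedule e2
                            WHERE e2.employeeid = e.employeeid
                              AND e2.projectid  = e.projectid
                              AND e2.dayid      = e.dayid + 1)) AS end_dayid
  FROM employee_schedule s
 WHERE NOT EXISTS (SELECT *
                     FROM employee_schedule s2
                    WHERE s2.employeeid = s.employeeid
                      AND s2.projectid  = s.projectid
                      AND s2.dayid      = s.dayid - 1);
```

On the sample data this would yield (168, 171), (169, 172), and (182, 184) for the three bookings.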
qid & accept id:
(1069311, 1069388)
query:
Passing an array of parameters to a stored procedure
soup:
Use a stored procedure:
Use a stored procedure:
EDIT: A complement for serializing a List (or anything else):
List<int> testList = new List<int>();
testList.Add(1);
testList.Add(2);
testList.Add(3);
XmlSerializer xs = new XmlSerializer(typeof(List<int>));
MemoryStream ms = new MemoryStream();
xs.Serialize(ms, testList);
string resultXML = UTF8Encoding.UTF8.GetString(ms.ToArray());
The result (ready to use with the XML parameter):
<?xml version="1.0"?>
<ArrayOfInt xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <int>1</int>
  <int>2</int>
  <int>3</int>
</ArrayOfInt>
ORIGINAL POST:
Passing XML as a parameter:
<ids>
  <id>1</id>
  <id>2</id>
</ids>
CREATE PROCEDURE [dbo].[DeleteAllData]
(
@XMLDoc XML
)
AS
BEGIN
DECLARE @handle INT
EXEC sp_xml_preparedocument @handle OUTPUT, @XMLDoc
DELETE FROM
YOURTABLE
WHERE
YOUR_ID_COLUMN NOT IN (
SELECT * FROM OPENXML (@handle, '/ids/id') WITH (id INT '.')
)
EXEC sp_xml_removedocument @handle
END
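A hypothetical call, passing hand-written XML that matches the '/ids/id' path the procedure expects:

```sql
-- Keeps rows whose id is 1 or 2; deletes everything else
EXEC dbo.DeleteAllData @XMLDoc = '<ids><id>1</id><id>2</id></ids>';
```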
Just do the order details condition in the usual way:
from o in orders
join od in orderdetails on o.id equals od.orderid
into details
where details.Any(d => d.Status == 'A')
select new { Order = o, Details = details}
(NB: Details is a sequence with each matching detail record; LINQ operators like First and FirstOrDefault can be used to extract just one.)
Or use an expression as the data source
from o in orders
join od in orderdetails.Where(d => d.Status == 'A') on o.id equals od.orderid
into details
select new { Order = o, Details = details}
Or even, use another comprehension expression as the source expression:
from o in orders
join od in (from d in orderdetails
            where d.Status == 'A'
            select d)
  on o.id equals od.orderid
into details
select new { Order = o, Details = details}
(Setting your DataContext's Log property lets you see the SQL, so you can compare what is actually generated.)
EDIT: Changed to use Group Join (... into var) to get the outer join (rather than an inner join).
soup wrap:
A post at:
Suggested using this syntax for placement of the <Query> element:
http://schemas.microsoft.com/sharepoint/soap/GetListItems
{CE7A4C2E-D03A-4AF3-BCA3-BA2A0ADCADC7}
1
*
However this would give me the following error:
Failed to execute web request for the specified URL
With the following in the details:
Element <Query> of parameter query is missing or invalid
From looking at the SOAP message with Microsoft Network Monitor, it looks as though the
However, I was able to get this to work using the method described in Martin Kurek's response at:
So, I used this as my query:
http://schemas.microsoft.com/sharepoint/soap/GetListItems
{CE7A4C2E-D03A-4AF3-BCA3-BA2A0ADCADC7}
*
And then defined a parameter on the dataset named query, with the following value:
1
I was also able to make my query dependent on a report parameter, by setting the query dataset parameter to the following expression:
="" &
Parameters!TaskID.Value &
" "
qid & accept id:
(1146012, 1146072)
query:
Join on multiple booleans
\n soup wrap:It's not really a SQL problem you're asking, just a boolean expression problem. I assume you've got another column in these tables that allows you to join the rows in t1 to t2, but following your examples (where there is only 1 row in t1), you can do it as:
SELECT t2.A2
, t2.B2
, t2.C2
FROM t1
, t2
WHERE (t2.A2 OR NOT T1.A1)
AND (t2.B2 OR NOT T1.B1)
AND (t2.C2 OR NOT T1.C1)
;
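Each (t2.A2 OR NOT t1.A1) predicate encodes the implication "if t1 sets the flag, t2 must set it too". You can see which flag combinations survive with a throwaway row constructor (SQL Server 2008+ syntax, 0/1 literals standing in for the booleans):

```sql
-- Rows satisfying "A1 implies A2": (0,0), (0,1), (1,1) pass; (1,0) is filtered out
SELECT v.A1, v.A2
FROM (VALUES (0,0), (0,1), (1,0), (1,1)) AS v(A1, A2)
WHERE (v.A2 = 1 OR NOT v.A1 = 1);
```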
I now see the non-abstracted answer you've posted above. Based on that, there are some issues in your SQL. For one thing, you should be expressing only the conditions in your JOIN clauses that relate the vw_fbScheduleFull table to the fbDivision table (i.e. the foreign/primary key relationship); all the LowerDivision/UpperDivision/SeniorDivision stuff should be in the WHERE clause.
Secondly, you're ignoring the operator precedence of the AND and OR operators - you want to enclose each of the *Division pairs within parens to avoid undesirable effects.
Not knowing the full schema of the tables, I would guess that the proper version of this query would look something like this:
SELECT vw_fbScheduleFull.LocationName
, vw_fbScheduleFull.FieldName
, vw_fbScheduleFull.Description
, vw_fbScheduleFull.StartTime
, vw_fbScheduleFull.EndTime
, vw_fbScheduleFull.LowerDivision
, vw_fbScheduleFull.UpperDivision
, vw_fbScheduleFull.SeniorDivision
FROM vw_fbScheduleFull
, fbDivision
WHERE vw_fbScheduleFull.PracticeDate = ?
AND vw_fbScheduleFull.Locked IS NULL
AND fbDivision.DivisionName = ?
AND (vw_fbScheduleFull.LowerDivision = 1 OR fbDivision.LowerDivision <> 1)
AND (vw_fbScheduleFull.UpperDivision = 1 OR fbDivision.UpperDivision <> 1)
AND (vw_fbScheduleFull.SeniorDivision = 1 OR fbDivision.SeniorDivision <> 1)
ORDER BY vw_fbScheduleFull.LocationName
, vw_fbScheduleFull.FieldName
, vw_fbScheduleFull.StartTime
;
Looking one more time, I realize that your "fbDivision.DivisionName = ?" probably is reducing the number of rows in that table to one, and that there isn't a formal PK/FK relationship between those two tables. In which case, you should dispense with the INNER JOIN nomenclature in the FROM clause and just list the two tables; I've updated my example.
qid & accept id: (1154702, 1154723) query: SQL Checking for NULL and incrementals
soup wrap:
This kind of incremental querying is just not efficient. You'll get better results by saying "I'll never need more than 100 results, so give me these":
SELECT top 100 *
FROM news
ORDER BY date desc
Then filtering further on the client side if you want only a particular day's items (such as the items with a common date as the first item in the result).
Or, you could transform your multiple query request into a two query request:
DECLARE
@theDate datetime,
@theDate2 datetime
SET @theDate = (SELECT Max(date) FROM news)
--trim the time off of @theDate
SET @theDate = DateAdd(dd, DateDiff(dd, 0, @theDate), 0)
SET @theDate2 = DateAdd(dd, 1, @theDate)
SELECT *
FROM news
WHERE @theDate <= date AND date < @theDate2
ORDER BY date desc
qid & accept id:
(1179355, 1179472)
query:
Oracle Minus - From a list of values, how do I count ONLY non reversed values
soup wrap:
Minus operations use distinct sets. Try this instead:
select row_number() over (partition by name_id, val order by name_id, val), name_id, val
from check_minus
where val > 0
minus
select row_number() over (partition by name_id, val order by name_id, val), name_id, abs(val)
from check_minus
where val < 0
It produces:
RowNum Name_Id Val
1, 1, 20
2, 1, 5
2, 1, 15
3, 1, 15
qid & accept id:
(1197943, 1319077)
query:
Creating public synonym at system level
soup wrap:
Try creating a view called MASTER_MYVIEW first (you may need to deal with privileges there as well):
create view master_myview as select ...;
Then create a public synonym for that new view:
create or replace public synonym master_myview for <owner>.master_myview;
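A public synonym only names the object; other users still need privileges on the underlying view before they can query it through the synonym (grantee name hypothetical):

```sql
-- Without a grant, the synonym resolves but SELECT still fails with ORA-00942
GRANT SELECT ON master_myview TO some_role;
```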
qid & accept id:
(1207740, 1207791)
query:
Make a query Count() return 0 instead of empty
soup wrap:
Replace the Count statements with
Sum(Iif(DateDiff("d",DateAdded,Date())>=91,Iif(DateDiff("d",DateAdded,Date())<=180,'1','0'),'0')) AS BTWN_91_180,
I'm not a fan of the nested Iifs, but it doesn't look like there's any way around them, since DateDiff and BETWEEN...AND were not playing nicely.
To prune ItemNames without any added dates, the query block had to be enclosed in an outer query, since a calculated field cannot be filtered on from within the same query. The end result is this query:
SELECT *
FROM
(
SELECT DISTINCT Source.ItemName AS InvestmentManager,
Sum(Iif(DateDiff("d",DateAdded,Date())>=20,Iif(DateDiff("d",DateAdded,Date())<=44,'1','0'),'0')) AS BTWN_20_44,
Sum(Iif(DateDiff("d",DateAdded,Date())>=45,Iif(DateDiff("d",DateAdded,Date())<=60,'1','0'),'0')) AS BTWN_45_60,
Sum(Iif(DateDiff("d",DateAdded,Date())>=61,Iif(DateDiff("d",DateAdded,Date())<=90,'1','0'),'0')) AS BTWN_61_90,
Sum(Iif(DateDiff("d",DateAdded,Date())>=91,Iif(DateDiff("d",DateAdded,Date())<=180,'1','0'),'0')) AS BTWN_91_180,
Sum(Iif(DateDiff("d",DateAdded,Date())>180,'1','0')) AS GT_180,
Sum(Iif(DateDiff("d",DateAdded,Date())>=20,'1','0')) AS Total
FROM Source
WHERE CompleteState='FAILED'
GROUP BY ItemName
)
WHERE Total > 0;
qid & accept id:
(1263780, 1263795)
query:
SQL - Find patterns of records
\n soup wrap:If it's one song after another, assuming a table named tblSongs with a 'sequence' & 'name' column. You might want to try something like
select top N first.name, second.name, count(*)
from tblSongs as first
inner join tblSongs as second
on second.sequence=first.sequence + 1
group by first.name, second.name
order by count(*) desc
If song sequence X,Y is counted the same as Y,X then
select top N first.name, second.name, count(*)
from tblSongs as first
inner join tblSongs as second
on second.sequence=first.sequence + 1
or second.sequence=first.sequence - 1
group by first.name, second.name
order by count(*) desc
If you are looking for any pattern of 2 song sequences, then
select first.name, second.name, abs(second.sequence - first.sequence) as spacing_count
from tblSongs as first
inner join tblSongs as second
on second.sequence=first.sequence + 1
or second.sequence=first.sequence - 1
Then do some statistical analysis on the spacing_count (which is beyond me).
I believe those will get you started.
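The self-join on sequence + 1 is runnable almost as-is; a small sqlite3 sketch, where LIMIT stands in for TOP N and the play history is invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tblSongs (sequence INTEGER, name TEXT)")
plays = [(1, 'x'), (2, 'y'), (3, 'z'), (4, 'x'), (5, 'y')]  # 'x' precedes 'y' twice
con.executemany("INSERT INTO tblSongs VALUES (?, ?)", plays)

# join each play to its immediate successor, then count the pairs
top_pair = con.execute("""
SELECT a.name, b.name, COUNT(*)
FROM tblSongs AS a
JOIN tblSongs AS b ON b.sequence = a.sequence + 1
GROUP BY a.name, b.name
ORDER BY COUNT(*) DESC
LIMIT 1
""").fetchone()
print(top_pair)
```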
qid & accept id: (1326701, 1326746) query: Single or multiple INSERTs based on values SELECTed soup:
It's not trivial. First, you need another column "Flag" which is 0:
INSERT INTO Results (year, month, day, hour, duration, court, Flag)
SELECT DATEPART (yy, b.StartDateTime),
DATEPART (mm, b.StartDateTime),
DATEPART (dd, b.StartDateTime),
DATEPART (hh, b.StartDateTime),
a.Duration,
a.Court,
0
FROM Bookings b
INNER JOIN Activities a
ON b.ActivityID = a.ID
You need to run these queries several times:
-- Copy all rows with duration > 1 and set the flag to 1
insert into results (year, month, day, hour, duration, court, Flag)
select year, month, day, hour+1, duration-1, court, 1
from results
where duration > 1
;
-- Set the duration of all copied rows to 1
update results
set duration = 1
where flag = 0 and duration > 1
;
-- Prepare the copies for the next round
update results
set flag = 0
where flag = 1
This will create an additional entry for each duration > 1. My guess is that you can't allocate a court for more than 8 hours, so you just need to run these three 8 times to fix all of them.
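The copy/trim/reset trio can be verified mechanically; a sqlite3 sketch with a made-up booking of duration 3 (the loop runs the trio the suggested 8 times; extra rounds are harmless no-ops):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE results
    (year INT, month INT, day INT, hour INT, duration INT, court TEXT, flag INT)""")
con.execute("INSERT INTO results VALUES (2009, 8, 1, 9, 3, 'Court 1', 0)")

# run the copy / trim / reset trio repeatedly until every slot is 1 hour long
for _ in range(8):
    con.execute("""INSERT INTO results
                   SELECT year, month, day, hour + 1, duration - 1, court, 1
                   FROM results WHERE duration > 1""")
    con.execute("UPDATE results SET duration = 1 WHERE flag = 0 AND duration > 1")
    con.execute("UPDATE results SET flag = 0 WHERE flag = 1")

final = con.execute("SELECT hour, duration FROM results ORDER BY hour").fetchall()
print(final)
```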
soup wrap:
The DISTINCT placed in a subquery should work:
SQL> INSERT INTO meeting
2 SELECT seq.nextval, meeting_desc, meeting_date
3 FROM (SELECT DISTINCT meeting_desc, meeting_date
4 FROM current_table);
2 rows inserted
Once this is done, you would join this newly created table with the old table to associate the generated ids to the children tables:
SQL> INSERT INTO topic
2 SELECT m.id, topic_seq.NEXTVAL, ct.topic_desc
3 FROM current_table ct
4 JOIN meeting m ON (ct.meeting_desc = m.meeting_desc
5 AND ct.meeting_date = m.meeting_date);
5 rows inserted
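The dedupe-then-join idea can be sketched in sqlite3, where an INTEGER PRIMARY KEY stands in for the Oracle sequence (table and column names follow the answer; the sample rows are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE current_table (meeting_desc TEXT, meeting_date TEXT, topic_desc TEXT);
INSERT INTO current_table VALUES
  ('Standup', '2009-01-05', 'Status'),
  ('Standup', '2009-01-05', 'Blockers'),
  ('Review',  '2009-01-06', 'Demo');
CREATE TABLE meeting (id INTEGER PRIMARY KEY, meeting_desc TEXT, meeting_date TEXT);
CREATE TABLE topic (meeting_id INT, topic_desc TEXT);

-- parent first: DISTINCT in the source collapses the duplicates
INSERT INTO meeting (meeting_desc, meeting_date)
SELECT DISTINCT meeting_desc, meeting_date FROM current_table;

-- then join back to pick up the generated ids for the children
INSERT INTO topic
SELECT m.id, ct.topic_desc
FROM current_table ct
JOIN meeting m ON ct.meeting_desc = m.meeting_desc
              AND ct.meeting_date = m.meeting_date;
""")
meeting_count = con.execute("SELECT COUNT(*) FROM meeting").fetchone()[0]
topic_count = con.execute("SELECT COUNT(*) FROM topic").fetchone()[0]
print(meeting_count, topic_count)
```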
qid & accept id:
(1344697, 1344756)
query:
How can I make a stored procedure return a "dataset" using a parameter I pass?
soup:
soup wrap: To fill a dataset from a stored procedure you would have code like below:
SqlConnection mySqlConnection =new SqlConnection("server=(local);database=MyDatabase;Integrated Security=SSPI;");
SqlCommand mySqlCommand = mySqlConnection.CreateCommand();
mySqlCommand.CommandText = "IDCategory";
mySqlCommand.CommandType = CommandType.StoredProcedure;
mySqlCommand.Parameters.Add("@IDCategory", SqlDbType.Int).Value = 5;
SqlDataAdapter mySqlDataAdapter = new SqlDataAdapter();
mySqlDataAdapter.SelectCommand = mySqlCommand;
DataSet myDataSet = new DataSet();
mySqlConnection.Open();
mySqlDataAdapter.Fill(myDataSet);
Your connection string will be different and there are a few different ways to do this, but this should get you going. Once you get a few of these under your belt, take a look at the using statement; it helps clean up the resources and requires a few fewer lines of code. This assumes a stored procedure named IDCategory with one parameter of the same name. It may be a little different in your setup.
Your stored procedure in this case will look something like:
CREATE PROC [dbo].[IDCategory]
@IDCategory int
AS
SELECT IDListing, IDCategory, Price, Seller, Image
FROM whateveryourtableisnamed
WHERE IDCategory = @IDCategory
Here's a link on Stored Procedure basics: http://www.sql-server-performance.com/articles/dba/stored_procedures_basics_p1.aspx
Here's a link on DataSets and other items with ADO.Net: http://authors.aspalliance.com/quickstart/howto/doc/adoplus/adoplusoverview.aspx
qid & accept id: (1362148, 1362166) query: How to insert into a table with just one IDENTITY column (SQL Express) soup:
soup wrap: INSERT INTO dbo.TableWithOnlyIdentity DEFAULT VALUES
This works just fine in my case. How are you trying to get those rows into the database? SQL Server Mgmt Studio? SQL query from .NET app?
Running inside Visual Studio in the "New Query" window, I get:
The DEFAULT VALUES SQL construct or statement is not supported.
==> OK, so Visual Studio can't handle it - that's not the fault of SQL Server, but of Visual Studio. Use the real SQL Management Studio instead - it works just fine there!
Using ADO.NET also works like a charm:
using (SqlConnection _con = new SqlConnection("server=(local);database=test;integrated security=SSPI;"))
{
    using (SqlCommand _cmd = new SqlCommand("INSERT INTO dbo.TableWithOnlyIdentity DEFAULT VALUES", _con))
    {
        _con.Open();
        _cmd.ExecuteNonQuery();
        _con.Close();
    }
}
Seems to be a limitation of VS - don't use VS for serious DB work :-) Marc
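DEFAULT VALUES is plain SQL, not an SSMS feature; sqlite3 accepts it too, so the behavior is easy to check outside Visual Studio (AUTOINCREMENT stands in for the IDENTITY column):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TableWithOnlyIdentity (id INTEGER PRIMARY KEY AUTOINCREMENT)")
# no column list needed: every column takes its default (here, the next id)
for _ in range(3):
    con.execute("INSERT INTO TableWithOnlyIdentity DEFAULT VALUES")

ids = con.execute("SELECT id FROM TableWithOnlyIdentity ORDER BY id").fetchall()
print(ids)
```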
qid & accept id: (1410216, 1411458) query: DB2 SQL add rows based on other rows soup:
soup wrap:
DrJokepu's solution is OK, but it depends on whether what you call "changes" in your question is fixed. That is: are you always going to add 1 to the 2nd column, or are those changes "dynamic", so that you have to decide at runtime which changes to apply?
DB2 and other SQL dialects have constructs (like INSERT INTO ... SELECT in DB2, or SELECT INTO in MS-SQL) that let you build one query from another.
If I am not mistaken, you want to do this:
- Insert some values into a table that come from a select (what you call "old")
- Create another set of records (like the "old" ones) but modify their values.
Or maybe you just want to do number 2.
Number 1 is easy, as Dr.Jokepu already showed you:
INSERT INTO target_table (col1, col2, ...) SELECT col1, col2, ... FROM source_table;
Number 2 you can always do in the same query, adding the changes as you select:
INSERT INTO MDSTD.MBANK ( MID, MAGN, MAAID, MTYPEOT, MAVAILS, MUSER, MTS)
SELECT
MID
,MAGN + 1
,0 as MAAID
,MTYPEOT
,'A' as MAVAILS
,MUSER
,GETDATE()
FROM mdstd.mbank
WHERE MTYPEOT = '2' and MAVAILS = 'A'
(Note: GETDATE() is an MS-SQL function; in DB2, use the CURRENT TIMESTAMP special register instead.)
One question remains, in your example you mentioned:
"New = A Old = O"
If Old changes to "O", do you really want to change the original row? The answer depends on the exact task you want to accomplish, which still isn't clear to me: whether you want to duplicate the rows and change only the "copies", or copy them and change both sets (old and new) using different rules.
UPDATE
After rereading your post I understand you want to do this:
- Duplicate a set of records (effectively copying them) but modifying their values.
- Modify the original set of records before you duplicated them
If that is the case, I don't think you can do it in "two" queries, because once you have duplicated the rows you'll have no way to tell which row is the old one and which is the new one.
A valid option is to create a temporary table and copy the rows there (modifying them as the "new" ones, with the query I've provided). Then, in the original table, execute an UPDATE (using the same WHERE clause to make sure you're modifying the same rows) to change the "old" values to whatever you want, and finally insert the already-modified "new" rows from the temp table back into the original table.
Finally, drop the temp table.
Phew!
Sounds weird, but unless we're talking about zillions of records every minute, this ought to be a kind of fast operation.
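The temp-table recipe can be sketched end to end; a sqlite3 toy version with a pared-down, invented stand-in for MDSTD.MBANK:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mbank (mid INT, magn INT, mavails TEXT)")
con.execute("INSERT INTO mbank VALUES (1, 10, 'A'), (2, 20, 'A')")

# 1) stage modified copies ("new") in a temp table
con.execute("""CREATE TEMP TABLE staging AS
               SELECT mid, magn + 1 AS magn, 'A' AS mavails
               FROM mbank WHERE mavails = 'A'""")
# 2) flip the originals to "old" with the same predicate
con.execute("UPDATE mbank SET mavails = 'O' WHERE mavails = 'A'")
# 3) append the staged copies, then drop the temp table
con.execute("INSERT INTO mbank SELECT * FROM staging")
con.execute("DROP TABLE staging")

summary = con.execute(
    "SELECT mavails, COUNT(*) FROM mbank GROUP BY mavails ORDER BY mavails").fetchall()
print(summary)
```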
qid & accept id:
(1421404, 1421486)
query:
Find out which tables were affected by Triggers
soup:
soup wrap:
Show the cascades and constraints:
mysql> SHOW CREATE TABLE tablename;
Show triggers:
mysql> USE dbname;
mysql> show triggers;
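Those are MySQL commands; the same information lives in each engine's catalog. In SQLite, for instance, triggers and the tables they fire on can be read from sqlite_master (the trigger below is invented for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (x INT);
CREATE TABLE audit (x INT);
CREATE TRIGGER trg_t AFTER INSERT ON t
BEGIN
  INSERT INTO audit VALUES (NEW.x);
END;
""")
# triggers (and the tables they fire on) live in the schema catalog
rows = con.execute(
    "SELECT name, tbl_name FROM sqlite_master WHERE type = 'trigger'").fetchall()
print(rows)
```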
qid & accept id:
(1479831, 1479840)
query:
Using ranking-function derived column in where clause (SQL Server 2008)
soup:
soup wrap:
You must apply the WHERE filter above the SELECT list in which the RowNumber column is created, because a ranking column cannot be referenced in the WHERE clause of the same query. Use a derived table or a CTE:
SELECT *
FROM (
SELECT *, ROW_NUMBER() OVER (...) as RowNumber
FROM ...) As ...
WHERE RowNumber = ...
The equivalent CTE is:
WITH cte AS (
SELECT *, ROW_NUMBER() OVER (...) as RowNumber
FROM ...)
SELECT * FROM cte
WHERE RowNumber = ...
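The CTE form runs unchanged on any engine with window functions (SQLite 3.25+ included); a small sketch with invented data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE scores (player TEXT, score INT)")
con.execute("INSERT INTO scores VALUES ('a', 30), ('b', 20), ('c', 10)")

# the filter on RowNumber sits outside the query that defines it
top_player = con.execute("""
WITH cte AS (
  SELECT player, score,
         ROW_NUMBER() OVER (ORDER BY score DESC) AS RowNumber
  FROM scores)
SELECT player FROM cte WHERE RowNumber = 1
""").fetchone()[0]
print(top_player)
```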
qid & accept id:
(1627604, 1627661)
query:
SQL Query including time calculation
soup:
soup wrap:
I'm not sure it's specified in the SQL Standard, but most SQL implementations have some sort of function for determining intervals. It's really going to boil down to what flavor of SQL you're using.
If you're working with Oracle/PLSQL:
SELECT NUMTODSINTERVAL(enddate - startdate, 'DAY') FROM MyTable
In SQL Server/T-SQL:
SELECT DATEDIFF(MINUTE, startdate, enddate) FROM MyTable
In MySQL:
SELECT TIMEDIFF(enddate, startdate) FROM MyTable;
I'm sure there's one for SQLite, PostgreSQL, and any other flavor as well.
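For completeness, SQLite has no DATEDIFF; the usual idiom there is julianday() arithmetic. A sketch using the answer's table and column names, with an invented 45-minute interval:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE MyTable (startdate TEXT, enddate TEXT)")
con.execute("INSERT INTO MyTable VALUES ('2009-01-01 10:00:00', '2009-01-01 10:45:00')")

# julianday() returns fractional days, so scale the difference to minutes
mins = con.execute("""
  SELECT CAST(ROUND((julianday(enddate) - julianday(startdate)) * 24 * 60) AS INTEGER)
  FROM MyTable""").fetchone()[0]
print("minutes:", mins)
```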
qid & accept id:
(1700110, 1700356)
query:
How do I select min/max dates from table 2 based on date in table 1 (without getting too much data from sums)
soup:
soup wrap:
If the monthly table contains a single entry for each month, you can simply do this:
select
m.date as m1,
m.other_field,
min(d.date) as m2,
max(d.date) as m3
from monthly m
join daily d
on month(d.date) = month(m.date)
and year(d.date) = year(m.date)
group by m.date, m.other_field
order by m.date
otherwise:
select m1, sum(other_field), m2, m3
from (
select
m.date as m1,
m.other_field,
min(d.date) as m2,
max(d.date) as m3
from monthly m
join daily d
on month(d.date) = month(m.date)
and year(d.date) = year(m.date)
group by m.date, m.other_field) A
group by A.m1, A.m2, A.m3
order by A.m1
Update from pax: Try as I might, I could not get the join solutions working properly - they all seemed to return the same wrong data as the original. In the end, I opted for a non-join solution since it worked and performance wasn't a big issue, since the tables typically have 24 rows (for monthly) and 700 rows (for daily). I'm editing this answer and accepting it since (1) it actually helped a great deal in getting the correct solution for me; and (2) I'm loath to write my own answer and claim the glory for myself.
Thanks for all your help. The following is what worked for me:
select
m.date as p1,
m.grouping_field as p2,
sum(m.aggregating_field) as p3,
(select min(date) from daily
where month(date) = month(m.date)
and year(date) = year(m.date)) as p4,
(select max(date) from daily
where month(date) = month(m.date)
and year(date) = year(m.date)) as p5
from
monthly m
group by
m.date, m.grouping_field
which gave me what I wanted:
P1 P2 P3 P4 P5
---------- ---- ---- ---------- ----------
2007-10-01 BoxA 12.3 2007-10-16 2007-10-30
2007-10-01 BoxB 13.6 2007-10-16 2007-10-30
2007-10-01 BoxC 7.4 2007-10-16 2007-10-30
2007-11-01 BoxA 20.3 2007-11-01 2007-11-30
2007-11-01 BoxB 24.2 2007-11-01 2007-11-30
2007-11-01 BoxC 21.7 2007-11-01 2007-11-30
2007-12-01 BoxA 6.9 2007-12-01 2007-12-15
2007-12-01 BoxB 6.4 2007-12-01 2007-12-15
2007-12-01 BoxC 6.9 2007-12-01 2007-12-15
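The accepted correlated-subquery shape is easy to verify; a sqlite3 sketch where strftime('%Y-%m', ...) stands in for the month()/year() pair (data invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE monthly (date TEXT, grouping_field TEXT, aggregating_field REAL);
CREATE TABLE daily (date TEXT);
INSERT INTO monthly VALUES ('2007-10-01', 'BoxA', 5.0), ('2007-10-01', 'BoxA', 7.5);
INSERT INTO daily VALUES ('2007-10-16'), ('2007-10-20'), ('2007-10-30');
""")

# correlated subqueries fetch the month's first/last daily date
# without multiplying the SUM through a join
row = con.execute("""
SELECT m.date, m.grouping_field, SUM(m.aggregating_field),
       (SELECT MIN(date) FROM daily
        WHERE strftime('%Y-%m', date) = strftime('%Y-%m', m.date)),
       (SELECT MAX(date) FROM daily
        WHERE strftime('%Y-%m', date) = strftime('%Y-%m', m.date))
FROM monthly m
GROUP BY m.date, m.grouping_field
""").fetchone()
print(row)
```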
qid & accept id:
(1712077, 1713063)
query:
Wipe data from Oracle DB
soup:
soup wrap:
The easiest way would be to drop the schema the objects are associated to:
DROP USER [schema name] CASCADE
Nuke it from orbit - it's the only way to be sure ;)
For the script you provided, you could instead run those queries without having to generate the intermediate script using the following anonymous procedure:
BEGIN
--Bye Views!
FOR i IN (SELECT uv.view_name
FROM USER_VIEWS uv) LOOP
EXECUTE IMMEDIATE 'drop view '|| i.view_name ||'';
END LOOP;
--Bye Sequences!
FOR i IN (SELECT us.sequence_name
FROM USER_SEQUENCES us) LOOP
EXECUTE IMMEDIATE 'drop sequence '|| i.sequence_name ||'';
END LOOP;
--Bye Tables!
FOR i IN (SELECT ut.table_name
FROM USER_TABLES ut) LOOP
EXECUTE IMMEDIATE 'drop table '|| i.table_name ||' CASCADE CONSTRAINTS ';
END LOOP;
--Bye Procedures/Functions/Packages!
FOR i IN (SELECT us.name,
us.type
FROM USER_SOURCE us
WHERE us.type IN ('PROCEDURE', 'FUNCTION', 'PACKAGE')
GROUP BY us.name, us.type) LOOP
EXECUTE IMMEDIATE 'drop '|| i.type ||' '|| i.name ||'';
END LOOP;
--Bye Synonyms!
FOR i IN (SELECT us.synonym_name
FROM USER_SYNONYMS us
WHERE us.synonym_name NOT LIKE 'sta%'
AND us.synonym_name LIKE 's_%') LOOP
EXECUTE IMMEDIATE 'drop synonym '|| i.synonym_name ||'';
END LOOP;
END;
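The loop-over-the-catalog idea ports to other engines as well; a sqlite3 toy version that drives dynamic DROP statements from sqlite_master (object names invented; identifiers come from the catalog, not user input):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t1 (x INT);
CREATE TABLE t2 (x INT);
CREATE VIEW v1 AS SELECT x FROM t1;
""")

# the catalog drives the dynamic DDL, just like USER_VIEWS/USER_TABLES above
objects = con.execute(
    "SELECT name, type FROM sqlite_master WHERE type IN ('table', 'view')").fetchall()
for name, kind in objects:
    con.execute(f"DROP {kind.upper()} {name}")

remaining = con.execute("SELECT COUNT(*) FROM sqlite_master").fetchone()[0]
print("objects left:", remaining)
```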
qid & accept id:
(1742507, 1742528)
query:
AUTO-Parametrized multiple SELECT
soup:
soup wrap:
Yes, absolutely. For example:
select cnt, count(*) from
( select department_id, count(*) as cnt
from employees
group by department_id
)
group by cnt;
This gives the "count of counts".
Or perhaps you mean something more like this, which is also valid:
select emp_name
from employees
where department_id in
( select department_id
from departments
where location_id in
( select location_id from locations
where country = 'US'
)
);
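The first example (the "count of counts") is runnable as-is on SQLite; a sketch with invented employees:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employees (emp_name TEXT, department_id INT)")
con.execute("INSERT INTO employees VALUES ('a', 1), ('b', 1), ('c', 2)")

# inner query: size of each department; outer query: how often each size occurs
counts = con.execute("""
SELECT cnt, COUNT(*)
FROM (SELECT department_id, COUNT(*) AS cnt
      FROM employees
      GROUP BY department_id)
GROUP BY cnt
ORDER BY cnt
""").fetchall()
print(counts)
```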
qid & accept id:
(1747745, 1748115)
query:
How to put a constraint on two combined fields?
soup:
soup wrap:
One possibility would be to hold a computed column on table1, i.e.
fieldx = (field1 || field2)
I don't know if DB2 supports computed (aka virtual) columns as such, but if not you can create a regular column and maintain it via a trigger. Then create the foreign key constraint:
ALTER TABLE table1
ADD CONSTRAINT foo FOREIGN KEY (fieldx) REFERENCES table2 (fieldx);
Another possibility, of course, would be to modify your table design so that the keys are held consistently: if field1 and field2 are atomic values, then they should appear as such in table2, not as a concatenated value (which more or less breaks 1NF).
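For engines that do support computed columns, the idea looks like this sqlite3 sketch (stored generated columns need SQLite 3.31+; the foreign key itself is omitted here and a join stands in for the reference check):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table2 (fieldx TEXT PRIMARY KEY);
INSERT INTO table2 VALUES ('A1');

-- the stored generated column keeps field1 || field2 in sync automatically
CREATE TABLE table1 (
  field1 TEXT,
  field2 TEXT,
  fieldx TEXT GENERATED ALWAYS AS (field1 || field2) STORED
);
INSERT INTO table1 (field1, field2) VALUES ('A', '1');
""")

matches = con.execute("""
  SELECT COUNT(*) FROM table1 t1
  JOIN table2 t2 ON t1.fieldx = t2.fieldx""").fetchone()[0]
print("matching rows:", matches)
```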
qid & accept id:
(1773534, 1773691)
query:
What is the right way to call an Oracle stored function from ado.net and get the result?
soup:
I'll assume you are using ODP.net (native Oracle client for .net).
\nLet's say you have 2 Oracle stored functions like this:
\n FUNCTION my_func\n (\n p_parm1 VARCHAR2\n , p_parm2 NUMBER\n ) RETURN VARCHAR2\n AS\n BEGIN\n RETURN p_parm1 || to_char(p_parm2);\n END;\n\n FUNCTION my_func2 RETURN SYS_REFCURSOR\n AS\n v_cursor SYS_REFCURSOR;\n BEGIN\n OPEN v_cursor FOR\n SELECT 'hello there Sean' col1\n FROM dual\n UNION ALL\n SELECT 'here is your answer' col1\n FROM dual; \n RETURN v_cursor; \n END;\n
\nOne of the functions returns a VARCHAR2 and the other returns ref cursor. On VB side, you could do this:
\nDim con As New OracleConnection("Data Source=xe;User Id=sandbox;Password=sandbox; Promotable Transaction=local")\n\nTry\n con.Open()\n Dim cmd As OracleCommand = con.CreateCommand()\n cmd.CommandText = "test_pkg.my_func"\n cmd.CommandType = CommandType.StoredProcedure\n\n Dim parm As OracleParameter\n\n parm = New OracleParameter()\n parm.Direction = ParameterDirection.ReturnValue\n parm.OracleDbType = OracleDbType.Varchar2\n parm.Size = 5000\n cmd.Parameters.Add(parm)\n\n parm = New OracleParameter()\n parm.Direction = ParameterDirection.Input\n parm.Value = "abc"\n parm.OracleDbType = OracleDbType.Varchar2\n cmd.Parameters.Add(parm)\n\n parm = New OracleParameter()\n parm.Direction = ParameterDirection.Input\n parm.Value = 42\n parm.OracleDbType = OracleDbType.Int32\n cmd.Parameters.Add(parm)\n\n cmd.ExecuteNonQuery()\n Console.WriteLine("result of first function is " + cmd.Parameters(0).Value)\n\n '''''''''''''''''''''''''''''''''''''''''''''\n ' now for the second query\n '''''''''''''''''''''''''''''''''''''''''''''\n cmd = con.CreateCommand()\n cmd.CommandText = "test_pkg.my_func2"\n cmd.CommandType = CommandType.StoredProcedure\n\n parm = New OracleParameter()\n parm.Direction = ParameterDirection.ReturnValue\n parm.OracleDbType = OracleDbType.RefCursor\n cmd.Parameters.Add(parm)\n\n Dim dr As OracleDataReader = cmd.ExecuteReader()\n While (dr.Read())\n Console.WriteLine(dr(0))\n End While\n\nFinally\n If (Not (con Is Nothing)) Then\n con.Close()\n End If\nEnd Try\n
\n
soup wrap:
I'll assume you are using ODP.net (native Oracle client for .net).
Let's say you have 2 Oracle stored functions like this:
FUNCTION my_func
(
p_parm1 VARCHAR2
, p_parm2 NUMBER
) RETURN VARCHAR2
AS
BEGIN
RETURN p_parm1 || to_char(p_parm2);
END;
FUNCTION my_func2 RETURN SYS_REFCURSOR
AS
v_cursor SYS_REFCURSOR;
BEGIN
OPEN v_cursor FOR
SELECT 'hello there Sean' col1
FROM dual
UNION ALL
SELECT 'here is your answer' col1
FROM dual;
RETURN v_cursor;
END;
One of the functions returns a VARCHAR2 and the other returns ref cursor. On VB side, you could do this:
Dim con As New OracleConnection("Data Source=xe;User Id=sandbox;Password=sandbox; Promotable Transaction=local")
Try
con.Open()
Dim cmd As OracleCommand = con.CreateCommand()
cmd.CommandText = "test_pkg.my_func"
cmd.CommandType = CommandType.StoredProcedure
Dim parm As OracleParameter
parm = New OracleParameter()
parm.Direction = ParameterDirection.ReturnValue
parm.OracleDbType = OracleDbType.Varchar2
parm.Size = 5000
cmd.Parameters.Add(parm)
parm = New OracleParameter()
parm.Direction = ParameterDirection.Input
parm.Value = "abc"
parm.OracleDbType = OracleDbType.Varchar2
cmd.Parameters.Add(parm)
parm = New OracleParameter()
parm.Direction = ParameterDirection.Input
parm.Value = 42
parm.OracleDbType = OracleDbType.Int32
cmd.Parameters.Add(parm)
cmd.ExecuteNonQuery()
Console.WriteLine("result of first function is " + cmd.Parameters(0).Value)
'''''''''''''''''''''''''''''''''''''''''''''
' now for the second query
'''''''''''''''''''''''''''''''''''''''''''''
cmd = con.CreateCommand()
cmd.CommandText = "test_pkg.my_func2"
cmd.CommandType = CommandType.StoredProcedure
parm = New OracleParameter()
parm.Direction = ParameterDirection.ReturnValue
parm.OracleDbType = OracleDbType.RefCursor
cmd.Parameters.Add(parm)
Dim dr As OracleDataReader = cmd.ExecuteReader()
While (dr.Read())
Console.WriteLine(dr(0))
End While
Finally
If (Not (con Is Nothing)) Then
con.Close()
End If
End Try
qid & accept id:
(1784283, 1784364)
query:
SQL Server 2005/2008 Group By statement with parameters without using dynamic SQL?
soup:
soup wrap:
You can group on a constant, which might be useful:
SELECT
SUM(Column0),
CASE @MyVar WHEN 'Column1' THEN Column1 ELSE '' END AS MyGrouping
FROM
Table1
GROUP BY
CASE @MyVar WHEN 'Column1' THEN Column1 ELSE '' END
Edit: this variant avoids the datatype mismatch, handles multiple values, and allows you to group on both columns...
SELECT
SUM(Column0),
CASE @MyVar WHEN 'Column1' THEN Column1 ELSE NULL END AS Column1,
CASE @MyVar WHEN 'Column2' THEN Column2 ELSE NULL END AS Column2
FROM
Table1
GROUP BY
CASE @MyVar WHEN 'Column1' THEN Column1 ELSE NULL END,
CASE @MyVar WHEN 'Column2' THEN Column2 ELSE NULL END
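The same trick works with a bound parameter in place of the T-SQL variable, which is the whole point of avoiding dynamic SQL; a sqlite3 sketch (table and data invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Table1 (Column0 INT, Column1 TEXT, Column2 TEXT)")
con.execute("INSERT INTO Table1 VALUES (1, 'x', 'p'), (2, 'x', 'q'), (3, 'y', 'p')")

# the grouping column is chosen by a bound parameter, not by splicing SQL text
grouped = con.execute("""
SELECT SUM(Column0),
       CASE :v WHEN 'Column1' THEN Column1 ELSE '' END AS MyGrouping
FROM Table1
GROUP BY CASE :v WHEN 'Column1' THEN Column1 ELSE '' END
ORDER BY MyGrouping
""", {"v": "Column1"}).fetchall()
print(grouped)
```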
qid & accept id:
(1785942, 1786090)
query:
How can I use check constraint in sql server 2005
soup:
soup wrap:
There is quite a wealth of information in the SQL Server documentation on this, but the two statements to create the check constraints you ask for are:
ALTER TABLE tablename ADD CONSTRAINT constraintName CHECK (colname between 1 and 5);
ALTER TABLE tablename ADD CONSTRAINT constraintName CHECK (colname in (1,2,4));
The condition of a check constraint can include:
A list of constant expressions introduced with in
A range of constant expressions introduced with between
A set of conditions introduced with like, which may contain wildcard characters
This allows you to have conditions like:
(colname >= 1 AND colname <= 5)
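A quick way to watch a check constraint reject bad data, sketched with Python's sqlite3 (SQLite only accepts CHECK clauses at CREATE TABLE time, not via ALTER TABLE ... ADD CONSTRAINT, so the same condition is declared inline here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Same condition as the first ALTER TABLE example, declared at create time.
con.execute("CREATE TABLE tablename ("
            "colname INT CONSTRAINT constraintName CHECK (colname BETWEEN 1 AND 5))")
con.execute("INSERT INTO tablename VALUES (3)")      # within range: accepted
rejected = False
try:
    con.execute("INSERT INTO tablename VALUES (9)")  # out of range: refused
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```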
qid & accept id:
(1809787, 1809981)
query:
Oracle: How do I determine the NEW name of an object in an "AFTER ALTER" trigger?
soup:
ALTER RENAME won't fire the trigger, RENAME x TO y will.
\nAs for your question about names before and after, I think you will have to parse the DDL to retrieve them, like that:
\nCREATE OR REPLACE TRIGGER MK_BEFORE_RENAME BEFORE RENAME ON SCHEMA \nDECLARE \n sql_text ora_name_list_t;\n v_stmt VARCHAR2(2000);\n n PLS_INTEGER; \nBEGIN \n n := ora_sql_txt(sql_text);\n FOR i IN 1..n LOOP\n v_stmt := v_stmt || sql_text(i);\n END LOOP;\n\n Dbms_Output.Put_Line( 'Before: ' || regexp_replace( v_stmt, 'rename[[:space:]]+([a-z0-9_]+)[[:space:]]+to.*', '\1', 1, 1, 'i' ) );\n Dbms_Output.Put_Line( 'After: ' || regexp_replace( v_stmt, 'rename[[:space:]]+.*[[:space:]]+to[[:space:]]+([a-z0-9_]+)', '\1', 1, 1, 'i' ) );\nEND;\n
\nThe regular expressions could surely be written more clearly, but it works:
\nRENAME \nmktestx\nTO mktesty;\n\nBefore: mktestx\nAfter: mktesty\n
\nUPDATE To accommodate your changed question:
\nCREATE OR REPLACE TRIGGER MK_AFTER_ALTER AFTER ALTER ON SCHEMA \nDECLARE \n sql_text ora_name_list_t;\n v_stmt VARCHAR2(2000);\n n PLS_INTEGER; \nBEGIN \n n := ora_sql_txt(sql_text);\n FOR i IN 1..n LOOP\n v_stmt := v_stmt || sql_text(i);\n END LOOP;\n\n Dbms_Output.Put_Line( 'Before: ' || regexp_replace( v_stmt, 'alter[[:space:]]+table[[:space:]]+([a-z0-9_]+)[[:space:]]+rename[[:space:]]+to.*', '\1', 1, 1, 'i' ) );\n Dbms_Output.Put_Line( 'After: ' || regexp_replace( v_stmt, 'alter[[:space:]]+table[[:space:]]+.*to[[:space:]]+([a-z0-9_]+)', '\1', 1, 1, 'i' ) );\nEND;\n
\n
soup wrap:
ALTER RENAME won't fire the trigger, RENAME x TO y will.
As for your question about names before and after, I think you will have to parse the DDL to retrieve them, like this:
CREATE OR REPLACE TRIGGER MK_BEFORE_RENAME BEFORE RENAME ON SCHEMA
DECLARE
sql_text ora_name_list_t;
v_stmt VARCHAR2(2000);
n PLS_INTEGER;
BEGIN
n := ora_sql_txt(sql_text);
FOR i IN 1..n LOOP
v_stmt := v_stmt || sql_text(i);
END LOOP;
Dbms_Output.Put_Line( 'Before: ' || regexp_replace( v_stmt, 'rename[[:space:]]+([a-z0-9_]+)[[:space:]]+to.*', '\1', 1, 1, 'i' ) );
Dbms_Output.Put_Line( 'After: ' || regexp_replace( v_stmt, 'rename[[:space:]]+.*[[:space:]]+to[[:space:]]+([a-z0-9_]+)', '\1', 1, 1, 'i' ) );
END;
The regular expressions could surely be written more clearly, but it works:
RENAME
mktestx
TO mktesty;
Before: mktestx
After: mktesty
UPDATE: To accommodate your changed question:
CREATE OR REPLACE TRIGGER MK_AFTER_ALTER AFTER ALTER ON SCHEMA
DECLARE
sql_text ora_name_list_t;
v_stmt VARCHAR2(2000);
n PLS_INTEGER;
BEGIN
n := ora_sql_txt(sql_text);
FOR i IN 1..n LOOP
v_stmt := v_stmt || sql_text(i);
END LOOP;
Dbms_Output.Put_Line( 'Before: ' || regexp_replace( v_stmt, 'alter[[:space:]]+table[[:space:]]+([a-z0-9_]+)[[:space:]]+rename[[:space:]]+to.*', '\1', 1, 1, 'i' ) );
Dbms_Output.Put_Line( 'After: ' || regexp_replace( v_stmt, 'alter[[:space:]]+table[[:space:]]+.*to[[:space:]]+([a-z0-9_]+)', '\1', 1, 1, 'i' ) );
END;
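The heavy lifting in both triggers is the pair of regexp_replace calls; the same extraction logic can be sanity-checked outside the database with Python's re module (the sample statement is the one from the answer):

```python
import re

# The two substitutions mirror the trigger's regexp_replace calls for the
# old and the new object name.
stmt = "RENAME mktestx TO mktesty"
before = re.sub(r"rename\s+([a-z0-9_]+)\s+to.*", r"\1", stmt, flags=re.I)
after = re.sub(r"rename\s+.*\s+to\s+([a-z0-9_]+)", r"\1", stmt, flags=re.I)
print(before, after)  # mktestx mktesty
```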
qid & accept id:
(1822504, 1822671)
query:
Determine existence of results in jet SQL?
soup:
How about:
\nSELECT TOP 1 IIF(EXISTS(\n SELECT * FROM foo \n WHERE ), 0, 1) As f1 \nFROM foo\n
\nPerhaps more clearly:
\nSELECT TOP 1 IIF(EXISTS(\n SELECT * FROM foo\n WHERE ), 0, 1) As F1 \nFROM MSysObjects\n
\n
soup wrap:
How about:
SELECT TOP 1 IIF(EXISTS(
SELECT * FROM foo
WHERE ), 0, 1) As f1
FROM foo
Perhaps more clearly:
SELECT TOP 1 IIF(EXISTS(
SELECT * FROM foo
WHERE ), 0, 1) As F1
FROM MSysObjects
qid & accept id:
(1830015, 1830082)
query:
Boolean expressions for a tagging system in SQL
soup:
Assuming that data -> items, word -> name and tagged_item -> tagged_items.
\nThis is for "tag1 AND (tag2 OR tag3) AND NOT tag4 OR tag5". I'm sure you can figure out the rest.
\nSELECT items.* FROM items\n LEFT JOIN (SELECT i1.item_id FROM tagged_items AS i1 INNER JOIN tags AS t1 ON i1.tag_id = t1.id AND t1.name = 'tag1') AS ti1 ON items.id = ti1.item_id\n LEFT JOIN (SELECT i2.item_id FROM tagged_items AS i2 INNER JOIN tags AS t2 ON i2.tag_id = t2.id AND t2.name = 'tag2') AS ti2 ON items.id = ti2.item_id\n LEFT JOIN (SELECT i3.item_id FROM tagged_items AS i3 INNER JOIN tags AS t3 ON i3.tag_id = t3.id AND t3.name = 'tag3') AS ti3 ON items.id = ti3.item_id\n LEFT JOIN (SELECT i4.item_id FROM tagged_items AS i4 INNER JOIN tags AS t4 ON i4.tag_id = t4.id AND t4.name = 'tag4') AS ti4 ON items.id = ti4.item_id\n LEFT JOIN (SELECT i5.item_id FROM tagged_items AS i5 INNER JOIN tags AS t5 ON i5.tag_id = t5.id AND t5.name = 'tag5') AS ti5 ON items.id = ti5.item_id\nWHERE ti1.item_id IS NOT NULL AND (ti2.item_id IS NOT NULL OR ti3.item_id IS NOT NULL) AND ti4.item_id IS NULL OR ti5.item_id IS NOT NULL;\n
\nEdit:\nIf you want to avoid subqueries, you could do this:
\nSELECT items.* FROM items \n LEFT JOIN tagged_items AS i1 ON items.id = i1.item_id LEFT JOIN tags AS t1 ON i1.tag_id = t1.id AND t1.name = 'tag1'\n ...\nWHERE t1.item_id IS NOT NULL ...\n
\nI'm not sure why you'd want to do it though, as the additional left joins will likely result in a slower run.
\n
soup wrap:
Assuming that data -> items, word -> name and tagged_item -> tagged_items.
This is for "tag1 AND (tag2 OR tag3) AND NOT tag4 OR tag5". I'm sure you can figure out the rest.
SELECT items.* FROM items
LEFT JOIN (SELECT i1.item_id FROM tagged_items AS i1 INNER JOIN tags AS t1 ON i1.tag_id = t1.id AND t1.name = 'tag1') AS ti1 ON items.id = ti1.item_id
LEFT JOIN (SELECT i2.item_id FROM tagged_items AS i2 INNER JOIN tags AS t2 ON i2.tag_id = t2.id AND t2.name = 'tag2') AS ti2 ON items.id = ti2.item_id
LEFT JOIN (SELECT i3.item_id FROM tagged_items AS i3 INNER JOIN tags AS t3 ON i3.tag_id = t3.id AND t3.name = 'tag3') AS ti3 ON items.id = ti3.item_id
LEFT JOIN (SELECT i4.item_id FROM tagged_items AS i4 INNER JOIN tags AS t4 ON i4.tag_id = t4.id AND t4.name = 'tag4') AS ti4 ON items.id = ti4.item_id
LEFT JOIN (SELECT i5.item_id FROM tagged_items AS i5 INNER JOIN tags AS t5 ON i5.tag_id = t5.id AND t5.name = 'tag5') AS ti5 ON items.id = ti5.item_id
WHERE ti1.item_id IS NOT NULL AND (ti2.item_id IS NOT NULL OR ti3.item_id IS NOT NULL) AND ti4.item_id IS NULL OR ti5.item_id IS NOT NULL;
Edit:
If you want to avoid subqueries, you could do this:
SELECT items.* FROM items
LEFT JOIN tagged_items AS i1 ON items.id = i1.item_id LEFT JOIN tags AS t1 ON i1.tag_id = t1.id AND t1.name = 'tag1'
...
WHERE t1.item_id IS NOT NULL ...
I'm not sure why you'd want to do it though, as the additional left joins will likely result in a slower run.
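Here is a reduced sketch of the pattern, covering just "tag1 AND (tag2 OR tag3)", run against SQLite through Python's sqlite3 (schema and sample rows are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE items (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE tags (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE tagged_items (item_id INT, tag_id INT);
INSERT INTO items VALUES (1,'both'),(2,'only tag1'),(3,'only tag2');
INSERT INTO tags VALUES (1,'tag1'),(2,'tag2'),(3,'tag3');
INSERT INTO tagged_items VALUES (1,1),(1,2),(2,1),(3,2);
""")
# One LEFT JOIN per tag; a NULL item_id afterwards means "item lacks the tag".
rows = con.execute("""
SELECT items.title FROM items
  LEFT JOIN (SELECT i1.item_id FROM tagged_items i1
             JOIN tags t1 ON i1.tag_id = t1.id AND t1.name = 'tag1') ti1
         ON items.id = ti1.item_id
  LEFT JOIN (SELECT i2.item_id FROM tagged_items i2
             JOIN tags t2 ON i2.tag_id = t2.id AND t2.name = 'tag2') ti2
         ON items.id = ti2.item_id
  LEFT JOIN (SELECT i3.item_id FROM tagged_items i3
             JOIN tags t3 ON i3.tag_id = t3.id AND t3.name = 'tag3') ti3
         ON items.id = ti3.item_id
WHERE ti1.item_id IS NOT NULL
  AND (ti2.item_id IS NOT NULL OR ti3.item_id IS NOT NULL)
""").fetchall()
print(rows)  # [('both',)] -- only item 1 has tag1 AND (tag2 OR tag3)
```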
qid & accept id:
(1853433, 1853967)
query:
SQL Server locks - avoid insertion of duplicate entries
soup:
To keep locks between multiple statements, they have to be wrapped in a transaction. In your example:
\nIf (SELECT 1 FROM t3 with (updlock) where t3.a=-86)\n INSERT INTO T3 SELECT -86,-86\n
\nThe update lock can be released before the insert is executed. This would work reliably:
\nbegin transaction\nIf (SELECT 1 FROM t3 with (updlock) where t3.a=-86)\n INSERT INTO T3 SELECT -86,-86\ncommit transaction\n
\nSingle statements are always wrapped in a transaction, so this would work too:
\n INSERT INTO T3 SELECT -86,-86\n WHERE NOT EXISTS (SELECT 1 FROM t3 with (updlock) where t3.a=-86)\n
\n(This is assuming you have "implicit transactions" turned off, like the default SQL Server setting.)
\n
soup wrap:
To keep locks between multiple statements, they have to be wrapped in a transaction. In your example:
If (SELECT 1 FROM t3 with (updlock) where t3.a=-86)
INSERT INTO T3 SELECT -86,-86
The update lock can be released before the insert is executed. This would work reliably:
begin transaction
If (SELECT 1 FROM t3 with (updlock) where t3.a=-86)
INSERT INTO T3 SELECT -86,-86
commit transaction
Single statements are always wrapped in a transaction, so this would work too:
INSERT INTO T3 SELECT -86,-86
WHERE NOT EXISTS (SELECT 1 FROM t3 with (updlock) where t3.a=-86)
(This is assuming you have "implicit transactions" turned off, like the default SQL Server setting.)
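The single-statement form can be sketched with Python's sqlite3; the with (updlock) hint is SQL Server specific and is dropped here, since in SQLite one statement is atomic anyway:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t3 (a INT, b INT)")

def insert_once():
    # The WHERE NOT EXISTS guard and the INSERT run as one atomic statement.
    con.execute("INSERT INTO t3 SELECT -86, -86 "
                "WHERE NOT EXISTS (SELECT 1 FROM t3 WHERE t3.a = -86)")

insert_once()
insert_once()  # second call finds the existing row and inserts nothing
count = con.execute("SELECT COUNT(*) FROM t3").fetchone()[0]
print(count)  # 1
```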
qid & accept id:
(1858559, 1860098)
query:
Search literal within a word
soup:
I think that should be better fetching the array of entries and then perform a text manipulation over the fetched data (in this case a search)!
\nBecause any text manipulation or complex query take more resources and if your database contains a lot of data, the query become too slow! Moreover, if you are running your \nquery on a shared server, that increases the performance issues!
\nYou can easily accomplish what you are trying to do with regex, once you have fetched the data from the database!
\n
\nUPDATE: My suggestion is the same even if you are running your script on a dedicated server! However, if you want to perform a full-text search of the word "literal" in BOOLEAN MODE like you have described, you can remove the + operator (because you are searching only one word) and construct the query as follow:
\nSELECT listOfColumsNames WHERE\nMATCH (colName) \nAGAINST ('literal*' IN BOOLEAN MODE);\n
\nHowever, even if you add the AND operator, your query works fine: tested on Apache Server with MySQL 5.1!
\nI suggest you to read the documentation about the full-text search in boolean mode.
\nThe only one problem of this query is that doesn't matches the word "literal" if it is a sub-string inside an other word, for example: "textliteraltext".\nAs you noticed, you can't use the * operator at the beginning of the word!
\nSo, to accomplish what you are trying to do, the fastest and easiest way is to follow the suggestion of Paul, using the % placeholder:
\nSELECT listOfColumsNames \nWHERE colName LIKE '%literal%';\n
\n
soup wrap:
I think it would be better to fetch the array of entries and then perform the text manipulation (in this case a search) over the fetched data.
Any text manipulation or complex query takes more resources, and if your database contains a lot of data the query becomes too slow. Moreover, if you are running your query on a shared server, that makes the performance issues worse.
You can easily accomplish what you are trying to do with a regex once you have fetched the data from the database.
UPDATE: My suggestion is the same even if you are running your script on a dedicated server. However, if you want to perform a full-text search for the word "literal" in BOOLEAN MODE as you described, you can remove the + operator (because you are searching for only one word) and construct the query as follows:
SELECT listOfColumnsNames
FROM tableName
WHERE MATCH (colName)
AGAINST ('literal*' IN BOOLEAN MODE);
However, even if you add the + (AND) operator, your query works fine; tested on an Apache server with MySQL 5.1.
I suggest you read the documentation about full-text search in boolean mode.
The only problem with this query is that it doesn't match the word "literal" when it is a substring inside another word, for example "textliteraltext".
As you noticed, you can't use the * operator at the beginning of the word.
So, to accomplish what you are trying to do, the fastest and easiest way is to follow the suggestion of Paul, using the % placeholder:
SELECT listOfColumnsNames
FROM tableName
WHERE colName LIKE '%literal%';
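The LIKE approach is easy to verify with Python's sqlite3 (sample data invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (colName TEXT)")
con.executemany("INSERT INTO t VALUES (?)",
                [("literal",), ("textliteraltext",), ("unrelated",)])
# % on both sides finds the word even mid-string, which the full-text
# 'literal*' search above cannot do.
rows = con.execute(
    "SELECT colName FROM t WHERE colName LIKE '%literal%'").fetchall()
print(sorted(rows))  # [('literal',), ('textliteraltext',)]
```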
qid & accept id:
(1979522, 1979549)
query:
How to fetch an object graph at once?
soup:
A simple JOIN would do the trick:
\nSELECT o.*\n, i.*\nFROM orders o\nINNER JOIN order_items i\nON o.id = i.order_id\n
\nThe will return one row for each row in order_items. The returned rows consist of all fields from the orders table, and concatenated to that, all fields from the order_items table (quite literally, the records from the tables are joined, that is, they are combined by record concatenation)
\nSo if orders has (id, order_date, customer_id) and order_items has (order_id, product_id, price) the result of the statement above will consist of records with (id, order_date, customer_id, order_id, product_id, price)
\nOne thing you need to be aware of is that this approach breaks down whenever there are two distinct 'detail' tables for one 'master'. Let me explain.
\nIn the orders/order_items example, orders is the master and order_items is the detail: each row in order_items belongs to, or is dependent on exactly one row in orders. The reverse is not true: one row in the orders table can have zero or more related rows in the order_items table. The join condition
\nON o.id = i.order_id \n
\nensures that only related rows are combined and returned (leaving out the condition would retturn all possible combinations of rows from the two tables, assuming the database would allow you to omit the join condition)
\nNow, suppose you have one master with two details, for example, customers as master and customer_orders as detail1 and customer_phone_numbers. Suppose you want to retrieve a particular customer along with all is orders and all its phone numbers. You might be tempted to write:
\nSELECT c.*, o.*, p.*\nFROM customers c\nINNER JOIN customer_orders o\nON c.id = o.customer_id\nINNER JOIN customer_phone_numbers p\nON c.id = p.customer_id\n
\nThis is valid SQL, and it will execute (asuming the tables and column names are in place)\nBut the problem is, is that it will give you a rubbish result. Assuming you have on customer with two orders (1,2) and two phone numbers (A, B) you get these records:
\ncustomer-data | order 1 | phone A\ncustomer-data | order 2 | phone A\ncustomer-data | order 1 | phone B\ncustomer-data | order 2 | phone B\n
\nThis is rubbish, as it suggests there is some relationship between order 1 and phone numbers A and B and order 2 and phone numbers A and B.
\nWhat's worse is that these results can completely explode in numbers of records, much to the detriment of database performance.
\nSo, JOIN is excellent to "flatten" a hierarchy of items of known depth (customer -> orders -> order_items) into one big table which only duplicates the master items for each detail item. But it is awful to extract a true graph of related items. This is a direct consequence of the way SQL is designed - it can only output normalized tables without repeating groups. This is way object relational mappers exist, to allow object definitions that can have multiple dependent collections of subordinate objects to be stored and retrieved from a relational database without losing your sanity as a programmer.
\n
soup wrap:
A simple JOIN would do the trick:
SELECT o.*
, i.*
FROM orders o
INNER JOIN order_items i
ON o.id = i.order_id
This will return one row for each row in order_items. The returned rows consist of all fields from the orders table and, concatenated to that, all fields from the order_items table (quite literally, the records from the tables are joined; that is, they are combined by record concatenation).
So if orders has (id, order_date, customer_id) and order_items has (order_id, product_id, price) the result of the statement above will consist of records with (id, order_date, customer_id, order_id, product_id, price)
One thing you need to be aware of is that this approach breaks down whenever there are two distinct 'detail' tables for one 'master'. Let me explain.
In the orders/order_items example, orders is the master and order_items is the detail: each row in order_items belongs to, or is dependent on exactly one row in orders. The reverse is not true: one row in the orders table can have zero or more related rows in the order_items table. The join condition
ON o.id = i.order_id
ensures that only related rows are combined and returned (leaving out the condition would return all possible combinations of rows from the two tables, assuming the database allows you to omit the join condition).
Now, suppose you have one master with two details, for example customers as master with customer_orders and customer_phone_numbers as the two details. Suppose you want to retrieve a particular customer along with all its orders and all its phone numbers. You might be tempted to write:
SELECT c.*, o.*, p.*
FROM customers c
INNER JOIN customer_orders o
ON c.id = o.customer_id
INNER JOIN customer_phone_numbers p
ON c.id = p.customer_id
This is valid SQL, and it will execute (assuming the tables and column names are in place).
But the problem is that it will give you a rubbish result. Assuming you have one customer with two orders (1, 2) and two phone numbers (A, B), you get these records:
customer-data | order 1 | phone A
customer-data | order 2 | phone A
customer-data | order 1 | phone B
customer-data | order 2 | phone B
This is rubbish, as it suggests there is some relationship between order 1 and phone numbers A and B and order 2 and phone numbers A and B.
What's worse is that these results can completely explode in numbers of records, much to the detriment of database performance.
So, JOIN is excellent to "flatten" a hierarchy of items of known depth (customer -> orders -> order_items) into one big table which only duplicates the master items for each detail item. But it is awful for extracting a true graph of related items. This is a direct consequence of the way SQL is designed: it can only output normalized tables without repeating groups. This is why object-relational mappers exist: to allow object definitions that can have multiple dependent collections of subordinate objects to be stored and retrieved from a relational database without losing your sanity as a programmer.
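The row explosion is easy to reproduce with Python's sqlite3: one customer with two orders and two phone numbers yields 2 x 2 = 4 joined rows (schema and data invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE customer_orders (customer_id INT, order_no INT);
CREATE TABLE customer_phone_numbers (customer_id INT, phone TEXT);
INSERT INTO customers VALUES (1, 'c');
INSERT INTO customer_orders VALUES (1, 1), (1, 2);
INSERT INTO customer_phone_numbers VALUES (1, 'A'), (1, 'B');
""")
# Two independent details joined to one master: a cross product per customer.
rows = con.execute("""
    SELECT c.name, o.order_no, p.phone
    FROM customers c
    JOIN customer_orders o ON c.id = o.customer_id
    JOIN customer_phone_numbers p ON c.id = p.customer_id
""").fetchall()
print(len(rows))  # 4 rows: 2 orders x 2 phones
```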
qid & accept id:
(2044752, 2045014)
query:
SQL mapping between multiple tables
soup:
To expand on Arthur Thomas's solution here's a union without the WHERE in the subselects so that you can create a universal view:
\nSELECT A.Name as Animal, B.Name as Zoo FROM A, AtoB, B\n WHERE AtoB.A_ID = A.ID && B.ID = AtoB.B_ID \nUNION\nSELECT C.Name as Animal, B.Name as Zoo FROM C, CtoB, B\n WHERE CtoB.C_ID = C.ID && B.ID = CtoB.B_ID\n
\nThen, you can perform a query like:
\nSELECT Animal FROM zoo_animals WHERE Zoo="Seattle Zoo"\n
\n
soup wrap:
To expand on Arthur Thomas's solution, here's a union without the WHERE in the subselects so that you can create a universal view:
SELECT A.Name as Animal, B.Name as Zoo FROM A, AtoB, B
WHERE AtoB.A_ID = A.ID && B.ID = AtoB.B_ID
UNION
SELECT C.Name as Animal, B.Name as Zoo FROM C, CtoB, B
WHERE CtoB.C_ID = C.ID && B.ID = CtoB.B_ID
Then, you can perform a query like:
SELECT Animal FROM zoo_animals WHERE Zoo="Seattle Zoo"
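A sketch of the unioned view with Python's sqlite3 (sample data invented; SQLite spells the && operator as AND):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE A (ID INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE C (ID INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE B (ID INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE AtoB (A_ID INT, B_ID INT);
CREATE TABLE CtoB (C_ID INT, B_ID INT);
INSERT INTO B VALUES (1, 'Seattle Zoo');
INSERT INTO A VALUES (1, 'Lion');
INSERT INTO C VALUES (1, 'Eagle');
INSERT INTO AtoB VALUES (1, 1);
INSERT INTO CtoB VALUES (1, 1);
CREATE VIEW zoo_animals AS
  SELECT A.Name AS Animal, B.Name AS Zoo FROM A, AtoB, B
   WHERE AtoB.A_ID = A.ID AND B.ID = AtoB.B_ID
  UNION
  SELECT C.Name AS Animal, B.Name AS Zoo FROM C, CtoB, B
   WHERE CtoB.C_ID = C.ID AND B.ID = CtoB.B_ID;
""")
rows = con.execute(
    "SELECT Animal FROM zoo_animals WHERE Zoo = 'Seattle Zoo'").fetchall()
print(sorted(rows))  # [('Eagle',), ('Lion',)]
```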
qid & accept id:
(2045053, 2045069)
query:
MYSQL - Retrieve Timestamps between dates
soup:
SELECT timestamp\nFROM tablename\nWHERE timestamp >= userStartDate\n AND timestamp < userEndDate + INTERVAL 1 DAY\n
\nThis will select every record having date portion between userStartDate and userEndDate, provided that these fields have type of DATE (without time portion).
\nIf the start and end dates come as strings, use STR_TO_DATE to convert from any given format:
\nSELECT timestamp\nFROM tablename\nWHERE timestamp >= STR_TO_DATE('01/11/2010', '%m/%d/%Y')\n AND timestamp < STR_TO_DATE('01/12/2010', '%m/%d/%Y') + INTERVAL 1 DAY\n
\n
soup wrap:
SELECT timestamp
FROM tablename
WHERE timestamp >= userStartDate
AND timestamp < userEndDate + INTERVAL 1 DAY
This will select every record whose date portion is between userStartDate and userEndDate, provided these fields are of type DATE (without a time portion).
If the start and end dates come as strings, use STR_TO_DATE to convert from any given format:
SELECT timestamp
FROM tablename
WHERE timestamp >= STR_TO_DATE('01/11/2010', '%m/%d/%Y')
AND timestamp < STR_TO_DATE('01/12/2010', '%m/%d/%Y') + INTERVAL 1 DAY
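The half-open range trick carries over to other databases; here is a sketch with Python's sqlite3, where date(..., '+1 day') plays the role of + INTERVAL 1 DAY (column name ts and the sample timestamps are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tablename (ts TEXT)")  # ISO-formatted timestamps
con.executemany("INSERT INTO tablename VALUES (?)",
                [("2010-01-11 09:00:00",),
                 ("2010-01-12 23:59:59",),
                 ("2010-01-13 00:00:00",)])
# Half-open range [start, end + 1 day): keeps all of the end date,
# nothing after it.
rows = con.execute("""
    SELECT ts FROM tablename
    WHERE ts >= '2010-01-11' AND ts < date('2010-01-12', '+1 day')
""").fetchall()
print(len(rows))  # 2: the midnight row on the 13th is excluded
```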
qid & accept id:
(2056938, 2056970)
query:
SQL isolate greatest values in a column
soup:
These queries both isolate the row with the highest xfer_id for each distinct client_plt_id
\nselect xfer_id, client_plt_id, xfer_doc_no\nfrom tab t1\nwhere xfer_id = (\n select max(xfer_id)\n from tab t2\n where t2.client_plt_id = t1.client_plt_id\n )\n
\nor, for mysql this may be better performing:
\nselect xfer_id, client_plt_id, xfer_doc_no\nfrom tab t1\ninner join (\n select max(xfer_id), client_plt_id\n from tab\n group by client_plt_id\n ) t2\non t1.client_plt_id = t2.client_plt_id\nand t1.xfer_id = t2.xfer_id\n
\nFor both these queries, you can simply add a WHERE clause to select on particualr client. Just append for example WHERE client_plt_id = 80016616.
\nIf you simply want the one row with the highest xfer_id, regardless of client_plt_id, this is what you need:
\nselect xfer_id, client_plt_id, xfer_doc_no\nfrom tab t1\nwhere xfer_id = (select max(xfer_id) from tab)\n
\n
soup wrap:
These queries both isolate the row with the highest xfer_id for each distinct client_plt_id
select xfer_id, client_plt_id, xfer_doc_no
from tab t1
where xfer_id = (
select max(xfer_id)
from tab t2
where t2.client_plt_id = t1.client_plt_id
)
or, for mysql this may be better performing:
select xfer_id, client_plt_id, xfer_doc_no
from tab t1
inner join (
select max(xfer_id), client_plt_id
from tab
group by client_plt_id
) t2
on t1.client_plt_id = t2.client_plt_id
and t1.xfer_id = t2.xfer_id
For both of these queries, you can simply add a WHERE clause to select a particular client; just append, for example, WHERE client_plt_id = 80016616.
If you simply want the one row with the highest xfer_id, regardless of client_plt_id, this is what you need:
select xfer_id, client_plt_id, xfer_doc_no
from tab t1
where xfer_id = (select max(xfer_id) from tab)
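The correlated-subquery version works unchanged in most databases; a quick check with Python's sqlite3 (sample data invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tab (xfer_id INT, client_plt_id INT, xfer_doc_no TEXT)")
con.executemany("INSERT INTO tab VALUES (?, ?, ?)",
                [(1, 10, "a"), (2, 10, "b"), (5, 20, "c"), (4, 20, "d")])
# One row per client: the one whose xfer_id equals that client's maximum.
rows = con.execute("""
    SELECT xfer_id, client_plt_id, xfer_doc_no
    FROM tab t1
    WHERE xfer_id = (SELECT MAX(xfer_id) FROM tab t2
                     WHERE t2.client_plt_id = t1.client_plt_id)
""").fetchall()
print(sorted(rows))  # [(2, 10, 'b'), (5, 20, 'c')]
```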
qid & accept id:
(2169720, 2169764)
query:
Oracle: pivot (coalesce) some counts onto a single row?
soup:
What you're looking for is pivoting - transposing the row data into columnar.
\nOracle 9i+, Using WITH/CTE:
\n
\nUse:
\nWITH summary AS (\n SELECT TRUNC(ls.started,'HH') AS dt,\n ls.depot,\n COUNT(*) AS num_depot\n FROM logstats ls\n GROUP BY TRUNC(ls.started,'HH'), ls.depot)\n SELECT s.dt,\n MAX(CASE WHEN s.depot = 'foo' THEN s.num_depot ELSE 0 END) AS "count_of_foo",\n MAX(CASE WHEN s.depot = 'bar' THEN s.num_depot ELSE 0 END) AS "count_of_bar"\n FROM summary s\nGROUP BY s.dt\nORDER BY s.dt\n
\nNon-WITH/CTE Equivalent
\n
\nUse:
\n SELECT s.dt,\n MAX(CASE WHEN s.depot = 'foo' THEN s.num_depot ELSE 0 END) AS "count_of_foo",\n MAX(CASE WHEN s.depot = 'bar' THEN s.num_depot ELSE 0 END) AS "count_of_bar"\n FROM (SELECT TRUNC(ls.started,'HH') AS dt,\n ls.depot,\n COUNT(*) AS num_depot\n FROM LOGSTATS ls\n GROUP BY TRUNC(ls.started, 'HH'), ls.depot) s\nGROUP BY s.dt\nORDER BY s.dt\n
\nPre Oracle9i would need the CASE statements changed to DECODE, Oracle specific IF/ELSE logic.
\nOracle 11g+, Using PIVOT
\n
\nUntested:
\n SELECT * \n FROM (SELECT TRUNC(ls.started, 'HH') AS dt,\n ls.depot\n FROM LOGSTATS ls\n GROUP BY TRUNC(ls.started, 'HH'), ls.depot)\n PIVOT (\n COUNT(*) FOR depot\n )\nORDER BY 1\n
\n
soup wrap:
What you're looking for is pivoting: transposing the row data into columns.
Oracle 9i+, Using WITH/CTE:
Use:
WITH summary AS (
SELECT TRUNC(ls.started,'HH') AS dt,
ls.depot,
COUNT(*) AS num_depot
FROM logstats ls
GROUP BY TRUNC(ls.started,'HH'), ls.depot)
SELECT s.dt,
MAX(CASE WHEN s.depot = 'foo' THEN s.num_depot ELSE 0 END) AS "count_of_foo",
MAX(CASE WHEN s.depot = 'bar' THEN s.num_depot ELSE 0 END) AS "count_of_bar"
FROM summary s
GROUP BY s.dt
ORDER BY s.dt
Non-WITH/CTE Equivalent
Use:
SELECT s.dt,
MAX(CASE WHEN s.depot = 'foo' THEN s.num_depot ELSE 0 END) AS "count_of_foo",
MAX(CASE WHEN s.depot = 'bar' THEN s.num_depot ELSE 0 END) AS "count_of_bar"
FROM (SELECT TRUNC(ls.started,'HH') AS dt,
ls.depot,
COUNT(*) AS num_depot
FROM LOGSTATS ls
GROUP BY TRUNC(ls.started, 'HH'), ls.depot) s
GROUP BY s.dt
ORDER BY s.dt
Pre-9i versions of Oracle would need the CASE expressions changed to DECODE, Oracle's own IF/ELSE-style function.
Oracle 11g+, Using PIVOT
Untested:
SELECT *
FROM (SELECT TRUNC(ls.started, 'HH') AS dt,
ls.depot
FROM LOGSTATS ls
GROUP BY TRUNC(ls.started, 'HH'), ls.depot)
PIVOT (
COUNT(*) FOR depot
)
ORDER BY 1
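The MAX(CASE ...) pivot itself is portable; here is a sketch with Python's sqlite3, where strftime('%Y-%m-%d %H', ...) stands in for Oracle's TRUNC(started, 'HH') hour bucket (sample data invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE logstats (started TEXT, depot TEXT)")
con.executemany("INSERT INTO logstats VALUES (?, ?)",
                [("2010-01-01 10:05", "foo"),
                 ("2010-01-01 10:40", "foo"),
                 ("2010-01-01 10:55", "bar")])
# Inner query: count per (hour, depot); outer query: one column per depot.
rows = con.execute("""
    SELECT dt,
           MAX(CASE WHEN depot = 'foo' THEN n ELSE 0 END) AS count_of_foo,
           MAX(CASE WHEN depot = 'bar' THEN n ELSE 0 END) AS count_of_bar
    FROM (SELECT strftime('%Y-%m-%d %H', started) AS dt, depot,
                 COUNT(*) AS n
          FROM logstats GROUP BY dt, depot)
    GROUP BY dt
""").fetchall()
print(rows)  # [('2010-01-01 10', 2, 1)]
```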
qid & accept id:
(2183107, 2184035)
query:
How to use foreign keys and a spatial index inside a MySQL table?
soup:
\nHow can we combine fast children search in tree and also have a SPATIAL INDEX in a table?
\n
\nCreate the indexes on id and parentId of your table manually:
\nCREATE INDEX ix_mytable_parentid ON mytable (parentid)\n
\nNote that since id is most probably a PRIMARY KEY, no explicit index is required on it (one will be created implicitly).
\nBTW, if you are having the natural geo-based hierarchy, what's the point of using parent-child relationships for searching?
\nYou can make the queries to use the SPATIAL indexes:
\nSELECT *\nFROM mytable m1\nJOIN mytable m2\nON MBRContains (m2.area, m1.area)\n AND m2.parentId = m1.id\nWHERE m1.name = 'London'\n
\nwhich will use the spatial index for searching and the relationship for fine filtering.
\n
soup wrap:
How can we combine fast children search in tree and also have a SPATIAL INDEX in a table?
Create the indexes on id and parentId of your table manually:
CREATE INDEX ix_mytable_parentid ON mytable (parentid)
Note that since id is most probably a PRIMARY KEY, no explicit index is required on it (one will be created implicitly).
BTW, if you have a natural geo-based hierarchy, what's the point of using parent-child relationships for searching?
You can make the queries to use the SPATIAL indexes:
SELECT *
FROM mytable m1
JOIN mytable m2
ON MBRContains (m2.area, m1.area)
AND m2.parentId = m1.id
WHERE m1.name = 'London'
which will use the spatial index for searching and the relationship for fine filtering.
qid & accept id:
(2199315, 2199341)
query:
How to get Microsoft SQL MATH POWER to show as decimal and not as INT (which it seems to do)?
soup:
The precision is lost because your input values are all integers.
\nTry
\nSELECT POWER(( 1.0 + 3.0 / 100.0 ), ( 1.0 / 365.0 ))\n
\nIf this doesn't give sufficient precision, cast the inputs to POWER as floats:
\nSELECT POWER(( CAST(1.0 as float) + CAST(3.0 AS float) / 100.0 ), ( 1.0 / 365.0 ))\n
\n
soup wrap:
The precision is lost because your input values are all integers.
Try
SELECT POWER(( 1.0 + 3.0 / 100.0 ), ( 1.0 / 365.0 ))
If this doesn't give sufficient precision, cast the inputs to POWER as floats:
SELECT POWER(( CAST(1.0 as float) + CAST(3.0 AS float) / 100.0 ), ( 1.0 / 365.0 ))
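The same effect is easy to see in any language with separate integer and float arithmetic; a Python sketch of the two computations:

```python
# All-integer arithmetic truncates the same way POWER does with integer
# inputs; floats keep the precision.
daily_factor_int = (1 + 3 // 100) ** (1 // 365)  # 3//100 == 0, 1//365 == 0
daily_factor = (1.0 + 3.0 / 100.0) ** (1.0 / 365.0)
print(daily_factor_int)  # 1: the precision is gone before the power is taken
print(daily_factor)      # about 1.000081
```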
qid & accept id:
(2289907, 2289947)
query:
Computing different sums depending on the value of one column
soup:
Here you can use a trick that boolean expressions evaluate to either 0 or 1 in SQL:
\nSELECT a2 + a8 + a7 * (a1 BETWEEN 0 AND 2) AS SUM\nFROM table_name\n
\nA more general (and more conventional) way is to use a CASE expression:
\nSELECT\n CASE WHEN a1 BETWEEN 0 AND 2\n THEN a2 + a7 + a8\n ELSE a2 + a8\n END AS SUM\nFROM table_name\n
\nYou can also do something like this to include a CASE expression without repeating the common terms:
\nSELECT\n a2 + a8 + (CASE WHEN a1 BETWEEN 0 AND 2 THEN a7 ELSE 0 END) AS SUM\nFROM table_name\n
\n
soup wrap:
Here you can use the trick that boolean expressions evaluate to either 0 or 1 in some SQL dialects (MySQL, for example):
SELECT a2 + a8 + a7 * (a1 BETWEEN 0 AND 2) AS SUM
FROM table_name
A more general (and more conventional) way is to use a CASE expression:
SELECT
CASE WHEN a1 BETWEEN 0 AND 2
THEN a2 + a7 + a8
ELSE a2 + a8
END AS SUM
FROM table_name
You can also do something like this to include a CASE expression without repeating the common terms:
SELECT
a2 + a8 + (CASE WHEN a1 BETWEEN 0 AND 2 THEN a7 ELSE 0 END) AS SUM
FROM table_name
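Both variants can be checked quickly with Python's sqlite3, which, like MySQL, evaluates a boolean expression as 0 or 1 (sample rows invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table_name (a1 INT, a2 INT, a7 INT, a8 INT)")
con.executemany("INSERT INTO table_name VALUES (?, ?, ?, ?)",
                [(1, 10, 100, 1000),   # a1 in 0..2: a7 is included
                 (5, 10, 100, 1000)])  # a1 outside:  a7 is dropped
# (a1 BETWEEN 0 AND 2) evaluates to 1 or 0, switching a7 on or off.
rows = con.execute("""
    SELECT a2 + a8 + a7 * (a1 BETWEEN 0 AND 2) AS total
    FROM table_name
""").fetchall()
print(rows)  # [(1110,), (1010,)]
```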
qid & accept id:
(2318539, 2318693)
query:
Paging and custom-ordering a result
soup:
Wrap your unioned queries in another one as a derived table and you can use the top clause.
\nSELECT TOP 100 * FROM (\n SELECT * FROM table where field = 'entry'\n UNION ALL\n SELECT * FROM table where field = 'entry#'\n) sortedresults\n
\n
\nYou were on the right track then. Add a defined column to each of your subsets of sorted results and then you can use that to keep the order sorted.
\nWITH SearchResult AS\n (SELECT *, ROW_NUMBER() OVER (ORDER BY QueryNum) as RowNum FROM\n (SELECT *, 1 as QueryNum FROM KeywordTable WHERE field = 'Keyword'\n UNION ALL\n SELECT *, 2 from KeywordTable WHERE field = 'Keyword#'\n ) SortedResults\n )\nSELECT * from SearchResults WHERE RowNum BETWEEN 4 and 10\n
\nIt is important that you also sort each subquery by something other than keyword so their order stays the same between runs (and as a secondary sort on the row number function). Example: say you have k1, k2, k3, k4, k5 - if you select * where keyword like k% you might get k1, k2, k3, k4, k5 one time and k5, k4, k3, k2, k1 the next (SQL doesn't guarantee return order and it can differ). That will throw off your paging.
\n
soup wrap:
Wrap your unioned queries in another one as a derived table and you can use the top clause.
SELECT TOP 100 * FROM (
SELECT * FROM table where field = 'entry'
UNION ALL
SELECT * FROM table where field = 'entry#'
) sortedresults
You were on the right track then. Add a constant column to each of your subsets of sorted results; you can then use it to preserve the ordering.
WITH SearchResult AS
(SELECT *, ROW_NUMBER() OVER (ORDER BY QueryNum) as RowNum FROM
(SELECT *, 1 as QueryNum FROM KeywordTable WHERE field = 'Keyword'
UNION ALL
SELECT *, 2 from KeywordTable WHERE field = 'Keyword#'
) SortedResults
)
SELECT * FROM SearchResult WHERE RowNum BETWEEN 4 and 10
It is important that you also sort each subquery by something other than keyword so their order stays the same between runs (and as a secondary sort on the row number function). Example: say you have k1, k2, k3, k4, k5 - if you select * where keyword like k% you might get k1, k2, k3, k4, k5 one time and k5, k4, k3, k2, k1 the next (SQL doesn't guarantee return order and it can differ). That will throw off your paging.
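A sketch of the stable-ordering idea with Python's sqlite3, using LIMIT/OFFSET in place of the ROW_NUMBER windowing (table and data invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE KeywordTable (field TEXT, title TEXT)")
con.executemany("INSERT INTO KeywordTable VALUES (?, ?)",
                [("Keyword", "exact-1"), ("Keyword", "exact-2"),
                 ("Keyword#", "fuzzy-1"), ("Keyword#", "fuzzy-2")])
# Exact matches get QueryNum 1, fuzzy matches 2; sorting on
# (QueryNum, title) keeps the order stable across runs, and
# LIMIT/OFFSET then pages through the combined result.
rows = con.execute("""
    SELECT title FROM (
        SELECT title, 1 AS QueryNum FROM KeywordTable WHERE field = 'Keyword'
        UNION ALL
        SELECT title, 2 FROM KeywordTable WHERE field = 'Keyword#'
    )
    ORDER BY QueryNum, title
    LIMIT 2 OFFSET 1
""").fetchall()
print(rows)  # [('exact-2',), ('fuzzy-1',)]
```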
qid & accept id:
(2355791, 2355996)
query:
Help with generating a report from data in a parent-children model
soup:
SQL 2000 Based solution
\nDECLARE @Stack TABLE (\n StackID INTEGER IDENTITY\n , Category VARCHAR(20)\n , RootID INTEGER\n , ChildID INTEGER\n , Visited BIT)\n\nINSERT INTO @Stack\nSELECT [Category] = c.category_name\n , [RootID] = c.category_id\n , [ChildID] = c.category_id\n , 0\nFROM Categories c\n\nWHILE EXISTS (SELECT * FROM @Stack WHERE Visited = 0)\nBEGIN\n DECLARE @StackID INTEGER\n SELECT @StackID = MAX(StackID) FROM @Stack\n\n INSERT INTO @Stack\n SELECT st.Category\n , st.RootID\n , c.category_id\n , 0\n FROM @Stack st\n INNER JOIN Categories c ON c.father_id = st.ChildID \n WHERE Visited = 0\n\n UPDATE @Stack\n SET Visited = 1\n WHERE StackID <= @StackID\nEND\n\nSELECT st.RootID\n , st.Category\n , COUNT(s.sales_id)\nFROM @Stack st\n INNER JOIN Sales s ON s.category_id = st.ChildID\nGROUP BY st.RootID, st.Category\nORDER BY st.RootID\n
\nSQL 2005 Based solution
\nA CTE should get you what you want
\n\n- Select each category from Categories to be the root item
\n- recursively add each child of every root item
\nINNER JOIN the results with your sales table. As every root is in the result of the CTE, a simple GROUP BY is sufficient to get a count for each item. \n
\nSQL Statement
\n;WITH QtyCTE AS (\n SELECT [Category] = c.category_name\n , [RootID] = c.category_id\n , [ChildID] = c.category_id\n FROM Categories c\n UNION ALL \n SELECT cte.Category\n , cte.RootID\n , c.category_id\n FROM QtyCTE cte\n INNER JOIN Categories c ON c.father_id = cte.ChildID\n)\nSELECT cte.RootID\n , cte.Category\n , COUNT(s.sales_id)\nFROM QtyCTE cte\n INNER JOIN Sales s ON s.category_id = cte.ChildID\nGROUP BY cte.RootID, cte.Category\nORDER BY cte.RootID\n
\n
soup wrap:
SQL 2000 Based solution
DECLARE @Stack TABLE (
StackID INTEGER IDENTITY
, Category VARCHAR(20)
, RootID INTEGER
, ChildID INTEGER
, Visited BIT)
INSERT INTO @Stack
SELECT [Category] = c.category_name
, [RootID] = c.category_id
, [ChildID] = c.category_id
, 0
FROM Categories c
WHILE EXISTS (SELECT * FROM @Stack WHERE Visited = 0)
BEGIN
DECLARE @StackID INTEGER
SELECT @StackID = MAX(StackID) FROM @Stack
INSERT INTO @Stack
SELECT st.Category
, st.RootID
, c.category_id
, 0
FROM @Stack st
INNER JOIN Categories c ON c.father_id = st.ChildID
WHERE Visited = 0
UPDATE @Stack
SET Visited = 1
WHERE StackID <= @StackID
END
SELECT st.RootID
, st.Category
, COUNT(s.sales_id)
FROM @Stack st
INNER JOIN Sales s ON s.category_id = st.ChildID
GROUP BY st.RootID, st.Category
ORDER BY st.RootID
SQL 2005 Based solution
A CTE should get you what you want
- Select each category from Categories to be the root item
- recursively add each child of every root item
INNER JOIN the results with your sales table. As every root is in the result of the CTE, a simple GROUP BY is sufficient to get a count for each item.
SQL Statement
;WITH QtyCTE AS (
SELECT [Category] = c.category_name
, [RootID] = c.category_id
, [ChildID] = c.category_id
FROM Categories c
UNION ALL
SELECT cte.Category
, cte.RootID
, c.category_id
FROM QtyCTE cte
INNER JOIN Categories c ON c.father_id = cte.ChildID
)
SELECT cte.RootID
, cte.Category
, COUNT(s.sales_id)
FROM QtyCTE cte
INNER JOIN Sales s ON s.category_id = cte.ChildID
GROUP BY cte.RootID, cte.Category
ORDER BY cte.RootID
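A runnable sketch of the same recursive-CTE rollup, using Python's sqlite3 (which also supports WITH RECURSIVE) and a tiny invented category tree:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Categories (category_id INTEGER PRIMARY KEY,
                         category_name TEXT, father_id INTEGER);
CREATE TABLE Sales (sales_id INTEGER PRIMARY KEY, category_id INTEGER);
INSERT INTO Categories VALUES (1, 'root', NULL), (2, 'child', 1);
INSERT INTO Sales (category_id) VALUES (1), (2), (2);
""")

# Same shape as the SQL 2005 CTE: every category starts as its own root, the
# recursion attaches all descendants, and GROUP BY root counts whole subtrees.
rows = conn.execute("""
WITH RECURSIVE QtyCTE (Category, RootID, ChildID) AS (
    SELECT category_name, category_id, category_id FROM Categories
    UNION ALL
    SELECT cte.Category, cte.RootID, c.category_id
    FROM QtyCTE cte
    JOIN Categories c ON c.father_id = cte.ChildID
)
SELECT cte.RootID, cte.Category, COUNT(s.sales_id)
FROM QtyCTE cte
JOIN Sales s ON s.category_id = cte.ChildID
GROUP BY cte.RootID, cte.Category
ORDER BY cte.RootID
""").fetchall()
print(rows)  # the root's count includes its child's two sales
```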
qid & accept id:
(2386632, 2386741)
query:
Fetch unique combinations of two field values
soup:
For Ms Access you can try
\nSELECT DISTINCT\n *\nFROM Table1 tM\nWHERE NOT EXISTS(SELECT 1 FROM Table1 t WHERE tM.Source = t.Dest AND tM.Dest = t.Source AND tm.Source > t.Source)\n
\nEDIT:
\nExample with table Data, which is the same...
\nSELECT DISTINCT\n *\nFROM Data tM\nWHERE NOT EXISTS(SELECT 1 FROM Data t WHERE tM.Source = t.Dest AND tM.Dest = t.Source AND tm.Source > t.Source)\n
\nor (Nice and Access Formatted...)
\nSELECT DISTINCT *\nFROM Data AS tM\nWHERE (((Exists (SELECT 1 FROM Data t WHERE tM.Source = t.Dest AND tM.Dest = t.Source AND tm.Source > t.Source))=False));\n
\n
soup wrap:
For MS Access you can try
SELECT DISTINCT
*
FROM Table1 tM
WHERE NOT EXISTS(SELECT 1 FROM Table1 t WHERE tM.Source = t.Dest AND tM.Dest = t.Source AND tm.Source > t.Source)
EDIT:
Example with table Data, which is the same...
SELECT DISTINCT
*
FROM Data tM
WHERE NOT EXISTS(SELECT 1 FROM Data t WHERE tM.Source = t.Dest AND tM.Dest = t.Source AND tm.Source > t.Source)
or (Nice and Access Formatted...)
SELECT DISTINCT *
FROM Data AS tM
WHERE (((Exists (SELECT 1 FROM Data t WHERE tM.Source = t.Dest AND tM.Dest = t.Source AND tm.Source > t.Source))=False));
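A quick sanity check of the NOT EXISTS predicate, run through Python's sqlite3 with invented rows (the logic is engine-agnostic): a row is dropped only when its mirror image exists and the mirror's Source sorts lower.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Data (Source TEXT, Dest TEXT)")
conn.executemany("INSERT INTO Data VALUES (?, ?)",
                 [("A", "B"), ("B", "A"), ("A", "C")])

# ('B','A') is eliminated because its mirror ('A','B') exists with the lower
# Source; unpaired rows like ('A','C') survive untouched.
rows = conn.execute("""
SELECT DISTINCT * FROM Data tM
WHERE NOT EXISTS (
    SELECT 1 FROM Data t
    WHERE tM.Source = t.Dest AND tM.Dest = t.Source
      AND tM.Source > t.Source)
""").fetchall()
print(sorted(rows))
```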
qid & accept id:
(2401396, 2401595)
query:
Oracle SQL - Column with unix timestamp, need dd-mm-yyyy timestamp
soup:
Given this data ...
\nSQL> alter session set nls_date_format='dd-mon-yyyy hh24:mi:ss'\n 2 /\n\nSession altered.\n\nSQL> select * from t23\n 2 /\n\nMY_TIMESTAMP\n--------------------\n08-mar-2010 13:06:02\n08-mar-2010 13:06:08\n13-mar-1985 13:06:26\n\nSQL> \n
\n.. it is simply a matter of converting the time elapsed since 01-JAN-1970 into seconds:
\nSQL> select my_timestamp\n 2 , (my_timestamp - date '1970-01-01') * 86400 as unix_ts\n 3 from t23\n 4 /\n\nMY_TIMESTAMP UNIX_TS\n-------------------- ----------\n08-mar-2010 13:06:02 1268053562\n08-mar-2010 13:06:08 1268053568\n13-mar-1985 13:06:26 479567186\n\nSQL>\n
\n
soup wrap:
Given this data ...
SQL> alter session set nls_date_format='dd-mon-yyyy hh24:mi:ss'
2 /
Session altered.
SQL> select * from t23
2 /
MY_TIMESTAMP
--------------------
08-mar-2010 13:06:02
08-mar-2010 13:06:08
13-mar-1985 13:06:26
SQL>
... it is simply a matter of converting the time elapsed since 01-JAN-1970 into seconds:
SQL> select my_timestamp
2 , (my_timestamp - date '1970-01-01') * 86400 as unix_ts
3 from t23
4 /
MY_TIMESTAMP UNIX_TS
-------------------- ----------
08-mar-2010 13:06:02 1268053562
08-mar-2010 13:06:08 1268053568
13-mar-1985 13:06:26 479567186
SQL>
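The same epoch arithmetic can be checked outside Oracle; here in Python, using one of the values from the output above:

```python
from datetime import datetime

# The Oracle expression (my_timestamp - DATE '1970-01-01') yields days, and
# multiplying by 86400 (seconds per day) gives the Unix timestamp. Python's
# timedelta does the days-to-seconds step for us.
epoch = datetime(1970, 1, 1)
ts = datetime(2010, 3, 8, 13, 6, 2)
unix_ts = int((ts - epoch).total_seconds())
print(unix_ts)  # 1268053562, matching the first row of the Oracle output
```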
qid & accept id:
(2406693, 2406949)
query:
MDX Year on Year Sales by Months
soup:
SELECT {[Time].[2009], [Time].[2010]} ON 0,\n [Time].[Months].Members ON 1\n FROM [Your Cube Name] WHERE [Measures].[Sales]
\nI based that on this query (below) that I've tested on the Adventure Works sample cube from Miscrosoft:
\nSELECT {[Ship Date].[Fiscal Year].&[2002], [Ship Date].[Fiscal Year].&[2003]} ON 0,\n[Ship Date].[Month of Year].Members ON 1\nFROM [Adventure Works] WHERE [Measures].[Sales Amount]\n
\nUPDATE:
\nBased on your query I'm not sure why it is working without specifiying a hierarchy on your cube query (like [Time].[2010] instead of [Time].[Hierarchy Name].[2010]) but could you try this:
\nSELECT EXISTS([Time].Members, {[Time].[2009], [Time].[2010]}) ON COLUMNS, \n[Time].[Months].Members ON ROWS \nFROM [SalesProductIndicator] WHERE [Measures].[Sales] \n
\nThanks
\n
soup wrap:
SELECT {[Time].[2009], [Time].[2010]} ON 0,
[Time].[Months].Members ON 1
FROM [Your Cube Name] WHERE [Measures].[Sales]
I based that on this query (below) that I've tested on the Adventure Works sample cube from Microsoft:
SELECT {[Ship Date].[Fiscal Year].&[2002], [Ship Date].[Fiscal Year].&[2003]} ON 0,
[Ship Date].[Month of Year].Members ON 1
FROM [Adventure Works] WHERE [Measures].[Sales Amount]
UPDATE:
Based on your query, I'm not sure why it works without specifying a hierarchy in your cube query (like [Time].[2010] instead of [Time].[Hierarchy Name].[2010]), but could you try this:
SELECT EXISTS([Time].Members, {[Time].[2009], [Time].[2010]}) ON COLUMNS,
[Time].[Months].Members ON ROWS
FROM [SalesProductIndicator] WHERE [Measures].[Sales]
Thanks
qid & accept id:
(2411210, 2411337)
query:
Finding a sql query to get the latest associated date for each grouping
soup:
select p.*\nfrom (\n select EMPID, DateWorked, Max(EffectiveDate) as MaxEffectiveDate\n from Payroll\n where EffectiveDate <= DateWorked\n group by EMPID, DateWorked\n) pm\ninner join Payroll p on pm.EMPID = p.EMPID and pm.DateWorked = p.DateWorked and pm.MaxEffectiveDate = p.EffectiveDate\n
\nOutput:
\nEMPID DateWorked Hours WageRate EffectiveDate\n----------- ----------------------- ----------- --------------------------------------- -----------------------\n1 2010-01-01 00:00:00.000 10 7.25 2009-06-10 00:00:00.000\n
\n
soup wrap:
select p.*
from (
select EMPID, DateWorked, Max(EffectiveDate) as MaxEffectiveDate
from Payroll
where EffectiveDate <= DateWorked
group by EMPID, DateWorked
) pm
inner join Payroll p on pm.EMPID = p.EMPID and pm.DateWorked = p.DateWorked and pm.MaxEffectiveDate = p.EffectiveDate
Output:
EMPID DateWorked Hours WageRate EffectiveDate
----------- ----------------------- ----------- --------------------------------------- -----------------------
1 2010-01-01 00:00:00.000 10 7.25 2009-06-10 00:00:00.000
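The inner-query-plus-join-back pattern, reproduced with Python's sqlite3 and a small invented Payroll table (the second row is a rate that only takes effect after the work date, so it must be skipped):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Payroll (EMPID INTEGER, DateWorked TEXT,
                                      Hours INTEGER, WageRate REAL,
                                      EffectiveDate TEXT)""")
conn.executemany("INSERT INTO Payroll VALUES (?, ?, ?, ?, ?)", [
    (1, "2010-01-01", 10, 7.25, "2009-06-10"),
    (1, "2010-01-01", 10, 8.00, "2010-06-01"),  # not yet effective: excluded
])

# Same pattern as the answer: find MAX(EffectiveDate) per (EMPID, DateWorked)
# among rates already in effect, then join back to pick up the full row.
rows = conn.execute("""
SELECT p.* FROM (
    SELECT EMPID, DateWorked, MAX(EffectiveDate) AS MaxEffectiveDate
    FROM Payroll WHERE EffectiveDate <= DateWorked
    GROUP BY EMPID, DateWorked
) pm
JOIN Payroll p ON pm.EMPID = p.EMPID
             AND pm.DateWorked = p.DateWorked
             AND pm.MaxEffectiveDate = p.EffectiveDate
""").fetchall()
print(rows)
```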
qid & accept id:
(2461579, 2461744)
query:
How to join dynamic sql statement in variable with normal statement
soup:
Use temp tables & have the records dumped into it (from the dynamic query) & use the temp table to join with the static query that you have.
\nset @query = 'CREATE table #myTempTable AS\nselect\n HumanResources.Employee.EmployeeID\n ,HumanResources.Employee.LoginID\n ,HumanResources.Employee.Title\n ,HumanResources.EmployeeAddress.AddressID\nfrom\n HumanResources.Employee\n inner join HumanResources.EmployeeAddress\n on HumanResources.Employee.EmployeeID = HumanResources.EmployeeAddress.EmployeeID\n;';\n\nEXEC (@query);\n
\nAnd then
\nselect\n Employees.*\n ,Addresses.City\nfrom\n #myTempTable as Employees\n inner join\n (\n select\n Person.Address.AddressID\n ,Person.Address.City\n from\n Person.Address\n ) as Addresses\n on Employees.AddressID = Addresses.AddressID\n
\n
soup wrap:
Use temp tables: have the records dumped into one (from the dynamic query), then use the temp table to join with the static query that you have. Note that it must be a global temp table (##) here, because a local #table created inside EXEC is dropped as soon as the dynamic batch ends, and that T-SQL uses SELECT ... INTO rather than CREATE TABLE ... AS:
set @query = 'select
HumanResources.Employee.EmployeeID
,HumanResources.Employee.LoginID
,HumanResources.Employee.Title
,HumanResources.EmployeeAddress.AddressID
into
##myTempTable
from
HumanResources.Employee
inner join HumanResources.EmployeeAddress
on HumanResources.Employee.EmployeeID = HumanResources.EmployeeAddress.EmployeeID
;';
EXEC (@query);
And then
select
Employees.*
,Addresses.City
from
##myTempTable as Employees
inner join
(
select
Person.Address.AddressID
,Person.Address.City
from
Person.Address
) as Addresses
on Employees.AddressID = Addresses.AddressID
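Leaving the T-SQL specifics aside, the overall pattern - materialize the dynamic query's result in a temp table, then join it from a static query - looks like this in Python's sqlite3 (schema and data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Employee (EmployeeID INTEGER, LoginID TEXT, AddressID INTEGER);
CREATE TABLE Address (AddressID INTEGER, City TEXT);
INSERT INTO Employee VALUES (1, 'jdoe', 10);
INSERT INTO Address VALUES (10, 'Springfield');
""")

# The dynamic statement is built as a string and executed, materializing its
# result in a temp table...
query = ("CREATE TEMP TABLE myTempTable AS "
         "SELECT EmployeeID, LoginID, AddressID FROM Employee")
conn.execute(query)

# ...which the static query then joins like any other table.
rows = conn.execute("""
SELECT e.LoginID, a.City
FROM myTempTable e JOIN Address a ON e.AddressID = a.AddressID
""").fetchall()
print(rows)  # [('jdoe', 'Springfield')]
```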
qid & accept id:
(2466091, 2466136)
query:
SQL to return dates that fall in period and range
soup:
For days use DATEDIFF and the modulo operation:
\nSELECT * FROM dates\nWHERE `date` BETWEEN '1987-10-20' AND '1988-1-1'\nAND DATEDIFF(`date`, '1987-10-20') % 10 = 0\n
\nFor a period of 10 years, calculate the difference in the year modulo the period, and ensure that the month and day are the same:
\nSELECT * FROM dates\nWHERE `date` BETWEEN '1980-10-20' AND '2000-10-20'\nAND MONTH(date) = 10 AND DAY(date) = 20 AND (YEAR(date) - 1980) % 10 = 0\n
\nA period measured in months is not well-defined because months have different lengths. What is one month later than January 30th? You can get it working for some special cases such as 'first in the month'.
\n
soup wrap:
For days use DATEDIFF and the modulo operation:
SELECT * FROM dates
WHERE `date` BETWEEN '1987-10-20' AND '1988-1-1'
AND DATEDIFF(`date`, '1987-10-20') % 10 = 0
For a period of 10 years, calculate the difference in the year modulo the period, and ensure that the month and day are the same:
SELECT * FROM dates
WHERE `date` BETWEEN '1980-10-20' AND '2000-10-20'
AND MONTH(date) = 10 AND DAY(date) = 20 AND (YEAR(date) - 1980) % 10 = 0
A period measured in months is not well-defined because months have different lengths. What is one month later than January 30th? You can get it working for some special cases such as 'first in the month'.
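The day-modulo filter can be checked with Python's sqlite3 (dates invented); SQLite has no DATEDIFF, so julianday() supplies the day difference instead.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dates (d TEXT)")
conn.executemany("INSERT INTO dates VALUES (?)",
                 [("1987-10-20",), ("1987-10-25",),
                  ("1987-10-30",), ("1987-11-01",)])

# Keep only dates whose distance in days from the anchor date is a multiple
# of the period (10 days here), same idea as the MySQL DATEDIFF % query.
rows = conn.execute("""
SELECT d FROM dates
WHERE d BETWEEN '1987-10-20' AND '1988-01-01'
  AND CAST(julianday(d) - julianday('1987-10-20') AS INTEGER) % 10 = 0
""").fetchall()
print([r[0] for r in rows])
```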
qid & accept id:
(2473843, 2473860)
query:
MySQL: Select remaining rows
soup:
Use:
\n SELECT t.name\n FROM TOOLS t\nLEFT JOIN INSTALLS i ON i.tool_id = t.id\n AND i.user_id = 99\n WHERE i.id IS NULL\n
\nAlternately, you can use NOT EXISTS:
\nSELECT t.name\n FROM TOOLS t\n WHERE NOT EXISTS(SELECT NULL \n FROM INSTALLS i\n WHERE i.tool_id = t.id\n AND i.user_id = 99)\n
\n...or NOT IN:
\nSELECT t.name\n FROM TOOLS t\n WHERE t.id NOT IN (SELECT i.tool_id\n FROM INSTALLS i\n WHERE i.user_id = 99)\n
\nOf the three options, the LEFT JOIN/IS NULL is the most efficient on MySQL. You can read more about it in this article.
\n
soup wrap:
Use:
SELECT t.name
FROM TOOLS t
LEFT JOIN INSTALLS i ON i.tool_id = t.id
AND i.user_id = 99
WHERE i.id IS NULL
Alternately, you can use NOT EXISTS:
SELECT t.name
FROM TOOLS t
WHERE NOT EXISTS(SELECT NULL
FROM INSTALLS i
WHERE i.tool_id = t.id
AND i.user_id = 99)
...or NOT IN:
SELECT t.name
FROM TOOLS t
WHERE t.id NOT IN (SELECT i.tool_id
FROM INSTALLS i
WHERE i.user_id = 99)
Of the three options, the LEFT JOIN/IS NULL is the most efficient on MySQL. You can read more about it in this article.
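The three forms can be run side by side with Python's sqlite3 on invented data to confirm they return the same rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TOOLS (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE INSTALLS (id INTEGER PRIMARY KEY, tool_id INTEGER, user_id INTEGER);
INSERT INTO TOOLS VALUES (1, 'hammer'), (2, 'wrench'), (3, 'saw');
INSERT INTO INSTALLS (tool_id, user_id) VALUES (1, 99), (2, 42);
""")

left_join = conn.execute("""
    SELECT t.name FROM TOOLS t
    LEFT JOIN INSTALLS i ON i.tool_id = t.id AND i.user_id = 99
    WHERE i.id IS NULL""").fetchall()

not_exists = conn.execute("""
    SELECT t.name FROM TOOLS t
    WHERE NOT EXISTS (SELECT NULL FROM INSTALLS i
                      WHERE i.tool_id = t.id AND i.user_id = 99)""").fetchall()

not_in = conn.execute("""
    SELECT t.name FROM TOOLS t
    WHERE t.id NOT IN (SELECT i.tool_id FROM INSTALLS i
                       WHERE i.user_id = 99)""").fetchall()

# All three anti-join spellings agree: user 99 only installed the hammer.
print(left_join, not_exists, not_in)
```

One caveat worth remembering: NOT IN behaves differently if the subquery can return NULL; the data here avoids that case.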
qid & accept id:
(2507933, 2536202)
query:
Formatting the output of an SQL query
soup:
I have found a way out of it.\nWe can use concatenation here,
\nselect name,id,location from employee;\n
\ngives us 2 different columns, but not in CSV format.
\nI did
\nselect name||','||id||','||location from employee;\n
\nWe get the output in a CSV format. It has just concatenated the output with commas (,).
\n
soup wrap:
I have found a way out of it.
We can use concatenation here,
select name,id,location from employee;
gives us three separate columns, but not in CSV format.
I did
select name||','||id||','||location from employee;
We get the output in a CSV format. It has just concatenated the output with commas (,).
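The same trick works anywhere || is the concatenation operator (Oracle, SQLite, standard SQL); a quick check with Python's sqlite3 and an invented table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (name TEXT, id INTEGER, location TEXT)")
conn.execute("INSERT INTO employee VALUES ('alice', 1, 'NYC')")

# || glues the columns (coercing the integer id to text) into one
# CSV-style string per row.
row = conn.execute(
    "SELECT name || ',' || id || ',' || location FROM employee").fetchone()
print(row[0])  # alice,1,NYC
```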
qid & accept id:
(2524600, 2527255)
query:
How do I join three tables with SQLalchemy and keeping all of the columns in one of the tables?
soup:
Option-1:
\nSubscription is just a many-to-many relation object, and I would suggest that you model it as such rather then as a separate class. See Configuring Many-to-Many Relationships documentation of SQLAlchemy/declarative.
\nYou model with the test code becomes:
\nfrom sqlalchemy import create_engine, Column, Integer, DateTime, String, ForeignKey, Table\nfrom sqlalchemy.orm import relation, scoped_session, sessionmaker, eagerload\nfrom sqlalchemy.ext.declarative import declarative_base\n\nengine = create_engine('sqlite:///:memory:', echo=True)\nsession = scoped_session(sessionmaker(bind=engine, autoflush=True))\nBase = declarative_base()\n\nt_subscription = Table('subscription', Base.metadata,\n Column('userId', Integer, ForeignKey('user.id')),\n Column('channelId', Integer, ForeignKey('channel.id')),\n)\n\nclass Channel(Base):\n __tablename__ = 'channel'\n\n id = Column(Integer, primary_key = True)\n title = Column(String)\n description = Column(String)\n link = Column(String)\n pubDate = Column(DateTime)\n\nclass User(Base):\n __tablename__ = 'user'\n\n id = Column(Integer, primary_key = True)\n username = Column(String)\n password = Column(String)\n sessionId = Column(String)\n\n channels = relation("Channel", secondary=t_subscription)\n\n# NOTE: no need for this class\n# class Subscription(Base):\n # ...\n\nBase.metadata.create_all(engine)\n\n\n# ######################\n# Add test data\nc1 = Channel()\nc1.title = 'channel-1'\nc2 = Channel()\nc2.title = 'channel-2'\nc3 = Channel()\nc3.title = 'channel-3'\nc4 = Channel()\nc4.title = 'channel-4'\nsession.add(c1)\nsession.add(c2)\nsession.add(c3)\nsession.add(c4)\nu1 = User()\nu1.username ='user1'\nsession.add(u1)\nu1.channels.append(c1)\nu1.channels.append(c3)\nu2 = User()\nu2.username ='user2'\nsession.add(u2)\nu2.channels.append(c2)\nsession.commit()\n\n\n# ######################\n# clean the session and test the code\nsession.expunge_all()\n\n# retrieve all (I assume those are not that many)\nchannels = session.query(Channel).all()\n\n# get subscription info for the user\n#q = session.query(User)\n# use eagerload(...) 
so that all 'subscription' table data is loaded with the user itself, and not as a separate query\nq = session.query(User).options(eagerload(User.channels))\nfor u in q.all():\n for c in channels:\n print (c.id, c.title, (c in u.channels))\n
\nwhich produces following output:
\n(1, u'channel-1', True)\n(2, u'channel-2', False)\n(3, u'channel-3', True)\n(4, u'channel-4', False)\n(1, u'channel-1', False)\n(2, u'channel-2', True)\n(3, u'channel-3', False)\n(4, u'channel-4', False)\n
\nPlease note the use of eagerload, which will issue only 1 SELECT statement instead of 1 for each User when channels are asked for.
\nOption-2:
\nBut if you want to keep you model and just create an SA query that would give you the columns as you ask, following query should do the job:
\nfrom sqlalchemy import and_\nfrom sqlalchemy.sql.expression import case\n#...\nq = (session.query(#User.username, \n Channel.id, Channel.title, \n case([(Subscription.channelId == None, False)], else_=True)\n ).outerjoin((Subscription, \n and_(Subscription.userId==User.id, \n Subscription.channelId==Channel.id))\n )\n )\n# optionally filter by user\nq = q.filter(User.id == uid()) # assuming uid() is the function that provides user.id\nq = q.filter(User.sessionId == id()) # assuming uid() is the function that provides user.sessionId\nres = q.all()\nfor r in res:\n print r\n
\nThe output is absolutely the same as in the option-1 above.
\n
soup wrap:
Option-1:
Subscription is just a many-to-many relation object, and I would suggest that you model it as such rather than as a separate class. See the Configuring Many-to-Many Relationships documentation of SQLAlchemy/declarative.
Your model with the test code becomes:
from sqlalchemy import create_engine, Column, Integer, DateTime, String, ForeignKey, Table
from sqlalchemy.orm import relation, scoped_session, sessionmaker, eagerload
from sqlalchemy.ext.declarative import declarative_base
engine = create_engine('sqlite:///:memory:', echo=True)
session = scoped_session(sessionmaker(bind=engine, autoflush=True))
Base = declarative_base()
t_subscription = Table('subscription', Base.metadata,
Column('userId', Integer, ForeignKey('user.id')),
Column('channelId', Integer, ForeignKey('channel.id')),
)
class Channel(Base):
__tablename__ = 'channel'
id = Column(Integer, primary_key = True)
title = Column(String)
description = Column(String)
link = Column(String)
pubDate = Column(DateTime)
class User(Base):
__tablename__ = 'user'
id = Column(Integer, primary_key = True)
username = Column(String)
password = Column(String)
sessionId = Column(String)
channels = relation("Channel", secondary=t_subscription)
# NOTE: no need for this class
# class Subscription(Base):
# ...
Base.metadata.create_all(engine)
# ######################
# Add test data
c1 = Channel()
c1.title = 'channel-1'
c2 = Channel()
c2.title = 'channel-2'
c3 = Channel()
c3.title = 'channel-3'
c4 = Channel()
c4.title = 'channel-4'
session.add(c1)
session.add(c2)
session.add(c3)
session.add(c4)
u1 = User()
u1.username ='user1'
session.add(u1)
u1.channels.append(c1)
u1.channels.append(c3)
u2 = User()
u2.username ='user2'
session.add(u2)
u2.channels.append(c2)
session.commit()
# ######################
# clean the session and test the code
session.expunge_all()
# retrieve all (I assume those are not that many)
channels = session.query(Channel).all()
# get subscription info for the user
#q = session.query(User)
# use eagerload(...) so that all 'subscription' table data is loaded with the user itself, and not as a separate query
q = session.query(User).options(eagerload(User.channels))
for u in q.all():
for c in channels:
print (c.id, c.title, (c in u.channels))
which produces the following output:
(1, u'channel-1', True)
(2, u'channel-2', False)
(3, u'channel-3', True)
(4, u'channel-4', False)
(1, u'channel-1', False)
(2, u'channel-2', True)
(3, u'channel-3', False)
(4, u'channel-4', False)
Please note the use of eagerload, which will issue only 1 SELECT statement instead of 1 for each User when channels are asked for.
Option-2:
But if you want to keep your model and just create an SA query that would give you the columns as you ask, the following query should do the job:
from sqlalchemy import and_
from sqlalchemy.sql.expression import case
#...
q = (session.query(#User.username,
Channel.id, Channel.title,
case([(Subscription.channelId == None, False)], else_=True)
).outerjoin((Subscription,
and_(Subscription.userId==User.id,
Subscription.channelId==Channel.id))
)
)
# optionally filter by user
q = q.filter(User.id == uid()) # assuming uid() is the function that provides user.id
q = q.filter(User.sessionId == id()) # assuming id() is the function that provides user.sessionId
res = q.all()
for r in res:
print r
The output is absolutely the same as in the option-1 above.
qid & accept id:
(2559110, 2559392)
query:
Is it possible to write a query which returns a date for every day between two specified days?
soup:
Here's an example from postgres, I hope the dialects are comparable in regards to recursive
\nWITH RECURSIVE t(n) AS (\n VALUES (1)\n UNION ALL\n SELECT n+1 FROM t WHERE n < 20\n)\nSELECT n FROM t;\n
\n...will return 20 records, numbers from 1 to 20\nCast/convert these to dates and there you are
\nUPDATE:\nSorry, don't have ORA here, but according to this article
\nSELECT\n SYS_CONNECT_BY_PATH(DUMMY, '/')\nFROM\n DUAL\nCONNECT BY\n LEVEL<4;\n
\ngives
\nSYS_CONNECT_BY_PATH(DUMMY,'/')\n--------------------------------\n/X\n/X/X\n/X/X/X\n
\nIt is also stated that this is supposed to be very efficient way to generate rows.\nIf ROWNUM can be used in the above select and if variable can be used in LEVEL condition then solution can be worked out.
\nUPDATE2:
\nAnd indeed there are several options.
\nSELECT (CAST('01-JAN-2010' AS DATE) + (ROWNUM - 1)) n\nFROM ( SELECT 1 just_a_column\n FROM dual\n CONNECT BY LEVEL <= 20\n )\n
\norafaq states that: 'It should be noted that in later versions of oracle, at least as far back as 10gR1, operations against dual are optimized such that they require no logical or physical I/O operations. This makes them quite fast.', so I would say this is not completely esoteric.
\n
soup wrap:
Here's an example from Postgres; I hope the dialects are comparable as far as recursive CTEs go:
WITH RECURSIVE t(n) AS (
VALUES (1)
UNION ALL
SELECT n+1 FROM t WHERE n < 20
)
SELECT n FROM t;
...will return 20 records, numbers from 1 to 20
Cast/convert these to dates and there you are
UPDATE:
Sorry, don't have ORA here, but according to this article
SELECT
SYS_CONNECT_BY_PATH(DUMMY, '/')
FROM
DUAL
CONNECT BY
LEVEL<4;
gives
SYS_CONNECT_BY_PATH(DUMMY,'/')
--------------------------------
/X
/X/X
/X/X/X
It is also stated that this is supposed to be a very efficient way to generate rows.
If ROWNUM can be used in the above select, and if a variable can be used in the LEVEL condition, then a solution can be worked out.
UPDATE2:
And indeed there are several options.
SELECT (CAST('01-JAN-2010' AS DATE) + (ROWNUM - 1)) n
FROM ( SELECT 1 just_a_column
FROM dual
CONNECT BY LEVEL <= 20
)
orafaq states that: 'It should be noted that in later versions of oracle, at least as far back as 10gR1, operations against dual are optimized such that they require no logical or physical I/O operations. This makes them quite fast.', so I would say this is not completely esoteric.
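In engines with WITH RECURSIVE, the cast-to-dates step can be folded directly into the generator; a sketch with Python's sqlite3 (the date range is invented):

```python
import sqlite3

# The recursive row generator from the Postgres example, adapted so each
# recursion step emits the next calendar day until the end date is reached.
conn = sqlite3.connect(":memory:")
rows = conn.execute("""
WITH RECURSIVE t(d) AS (
    VALUES ('2010-01-01')
    UNION ALL
    SELECT date(d, '+1 day') FROM t WHERE d < '2010-01-05'
)
SELECT d FROM t
""").fetchall()
print([r[0] for r in rows])  # five consecutive dates
```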
qid & accept id:
(2563918, 2564009)
query:
Create a Cumulative Sum Column in MySQL
soup:
If performance is an issue, you could use a MySQL variable:
\nset @csum := 0;\nupdate YourTable\nset cumulative_sum = (@csum := @csum + count)\norder by id;\n
\nAlternatively, you could remove the cumulative_sum column and calculate it on each query:
\nset @csum := 0;\nselect id, count, (@csum := @csum + count) as cumulative_sum\nfrom YourTable\norder by id;\n
\nThis calculates the running sum in a running way :)
\n
soup wrap:
If performance is an issue, you could use a MySQL variable:
set @csum := 0;
update YourTable
set cumulative_sum = (@csum := @csum + count)
order by id;
Alternatively, you could remove the cumulative_sum column and calculate it on each query:
set @csum := 0;
select id, count, (@csum := @csum + count) as cumulative_sum
from YourTable
order by id;
This calculates the running sum in a running way :)
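Engines without session variables can get the same running total from a correlated subquery; a sketch with Python's sqlite3 (data invented, and the column renamed cnt for clarity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE YourTable (id INTEGER PRIMARY KEY, cnt INTEGER)")
conn.executemany("INSERT INTO YourTable VALUES (?, ?)",
                 [(1, 3), (2, 5), (3, 2)])

# For each row, sum every cnt up to and including that row's id - the same
# cumulative_sum the MySQL variable builds incrementally.
rows = conn.execute("""
SELECT id, cnt,
       (SELECT SUM(cnt) FROM YourTable t2 WHERE t2.id <= t1.id) AS cumulative_sum
FROM YourTable t1
ORDER BY id
""").fetchall()
print(rows)
```

The correlated form is O(n²) versus the variable's single pass, which is exactly the performance concern the answer raises.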
qid & accept id:
(2588304, 2588972)
query:
SQL query multi table selection
soup:
Lots of same answers here. For some reason, though, all of them are joining the Section table which is (likely) not necessary.
\nselect\n p.*\n\nfrom\n Product p,\n Category c\n\nwhere\n p.category_id = c.id and\n c.section_id = 123\n;\n
\n
\nExplicit ANSI JOIN syntax per @nemiss's request:
\nselect\n p.*\n\nfrom Product p\n\njoin Category c\n on c.id = p.category_id\n and c.section_id = 123\n;\n
\n
\nPossible reason to include Section table: Selecting products based on Section name (instead of ID).
\nselect\n p.*\n\nfrom Product p\n\njoin Category c\n on c.id = p.category_id\n\njoin Section s\n on s.id = c.section_id\n and s.name = 'Books'\n;\n
\nIf doing this, you'll want to make sure Section.name is indexed
\nalter table Product add index name;\n
\n
soup wrap:
Lots of the same answers here. For some reason, though, all of them are joining the Section table, which is (likely) not necessary.
select
p.*
from
Product p,
Category c
where
p.category_id = c.id and
c.section_id = 123
;
Explicit ANSI JOIN syntax per @nemiss's request:
select
p.*
from Product p
join Category c
on c.id = p.category_id
and c.section_id = 123
;
Possible reason to include Section table: Selecting products based on Section name (instead of ID).
select
p.*
from Product p
join Category c
on c.id = p.category_id
join Section s
on s.id = c.section_id
and s.name = 'Books'
;
If doing this, you'll want to make sure Section.name is indexed
alter table Section add index (name);
qid & accept id:
(2640048, 2640090)
query:
SQL: how to get the left 3 numbers from an int
soup:
For SQL Server, the easiest way would definitely be:
\nSELECT CAST(LEFT(CAST(YourInt AS VARCHAR(100)), 3) AS INT)\n
\nConvert to string, take the left most three characters, and convert those back to an INT.
\nDoing it purely on the numerical value gets messy since you need to know how many digits you need to get rid of and so forth...
\nIf you want to use purely only INT's, you'd have to construct something like this (at least you could do this in SQL Server - I'm not familiar enough with Access to know if that'll work in the Access SQL "dialect"):
\nDECLARE @MyInt INT = 1234567\n\nSELECT\n CASE \n WHEN @MyInt < 1000 THEN @MyInt\n WHEN @MyInt > 10000000 THEN @MyInt / 100000\n WHEN @MyInt > 1000000 THEN @MyInt / 10000\n WHEN @MyInt > 100000 THEN @MyInt / 1000\n WHEN @MyInt > 10000 THEN @MyInt / 100\n WHEN @MyInt > 1000 THEN @MyInt / 10\n END AS 'NewInt'\n
\nBut that's always an approximation - what if you have a really really really large number..... it might just fall through the cracks....
\n
soup wrap:
For SQL Server, the easiest way would definitely be:
SELECT CAST(LEFT(CAST(YourInt AS VARCHAR(100)), 3) AS INT)
Convert to string, take the leftmost three characters, and convert those back to an INT.
Doing it purely on the numerical value gets messy since you need to know how many digits you need to get rid of and so forth...
If you want to use only INTs, you'd have to construct something like this (at least you could do this in SQL Server - I'm not familiar enough with Access to know if that'll work in the Access SQL "dialect"):
DECLARE @MyInt INT = 1234567
SELECT
CASE
WHEN @MyInt < 1000 THEN @MyInt
WHEN @MyInt >= 10000000 THEN @MyInt / 100000
WHEN @MyInt >= 1000000 THEN @MyInt / 10000
WHEN @MyInt >= 100000 THEN @MyInt / 1000
WHEN @MyInt >= 10000 THEN @MyInt / 100
WHEN @MyInt >= 1000 THEN @MyInt / 10
END AS 'NewInt'
But that's always an approximation - if you have a really large number (nine digits or more), it isn't covered by the cases above, and you'd have to keep extending the CASE. The string approach doesn't have that problem.
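Both routes can be mirrored in plain Python to see the difference: the string route works for any length, while the arithmetic route has to keep dividing until three digits remain (the function names are mine):

```python
def left3_string(n: int) -> int:
    # Convert to string, slice the leftmost three characters, convert back.
    return int(str(n)[:3])

def left3_math(n: int) -> int:
    # Strip one trailing digit at a time; the loop handles any magnitude,
    # which is what a fixed CASE expression cannot do.
    while n >= 1000:
        n //= 10
    return n

print(left3_string(1234567), left3_math(1234567))  # 123 123
```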
qid & accept id:
(2651249, 2652259)
query:
wanted to get all dates in mysql result
soup:
There is an approach that can do this in pure SQL but it has limitations.
\nFirst you need to have a number sequence 1,2,3...n as rows (assume select row from rows return that).
\nThen you can left join on this and convert to dates based on number of days between min and max.
\n select @min_join_on := (select min(join_on) from user);\n select @no_rows := (select datediff(max(join_on), @min_join_on) from user)+1;\n
\nwill give you the required number of rows, which then you can use to
\n select adddate(@min_join_on, interval row day) from rows where row <= @no_rows;\n
\nwill return a required sequence of dates on which then you can do a left join back to the users table.
\nUsing variables can be avoided if you use sub queries, I broke it down for readability.
\nNow, the problem is that the number of rows in table rows has to be bigger then @no_rows.\nFor 10,000 rows you can work with date ranges of up to 27 years, with 100,000 rows you can work with date ranges of up to 273 years (this feels really bad, but I am afraid that if you don't want to use stored procedures it will have to look and feel awkward).
\nSo, if you can work with such fixed date ranges you can even substitute the table with the query, such as this
\nSELECT @row := @row + 1 as row FROM (select 0 union all select 1 union all select 3 union all select 4 union all select 5 union all select 6 union all select 6 union all select 7 union all select 8 union all select 9) t, (select 0 union all select 1 union all select 3 union all select 4 union all select 5 union all select 6 union all select 6 union all select 7 union all select 8 union all select 9) t2, (select 0 union all select 1 union all select 3 union all select 4 union all select 5 union all select 6 union all select 6 union all select 7 union all select 8 union all select 9) t3, (select 0 union all select 1 union all select 3 union all select 4 union all select 5 union all select 6 union all select 6 union all select 7 union all select 8 union all select 9) t4, (SELECT @row:=0) r\n
\nwhich will produce 10,000 rows going from 1 to 10,000 and it will not be terribly inefficient at it.
\nSo at the end it is doable in a single query.
\ncreate table user(id INT NOT NULL AUTO_INCREMENT, name varchar(100), join_on date, PRIMARY KEY(id));\n\nmysql> select * from user;\n+----+-------+------------+\n| id | name | join_on |\n+----+-------+------------+\n| 1 | user1 | 2010-04-02 | \n| 2 | user2 | 2010-04-04 | \n| 3 | user3 | 2010-04-08 | \n| 4 | user4 | 2010-04-08 | \n+----+-------+------------+\n4 rows in set (0.00 sec)\n\ninsert into user values (null, 'user1', '2010-04-02'), (null, 'user2', '2010-04-04'), (null, 'user3', '2010-04-08'), (null, 'user4', '2010-04-08')\n\n\nSELECT date, count(id)\nFROM (\nSELECT adddate((select min(join_on) from user), row-1) as date \nFROM ( \nSELECT @row := @row + 1 as row FROM (select 0 union all select 1 union all select 3 union all select 4 union all select 5 union all select 6 union all select 6 union all select 7 union all select 8 union all select 9) t, (select 0 union all select 1 union all select 3 union all select 4 union all select 5 union all select 6 union all select 6 union all select 7 union all select 8 union all select 9) t2, (SELECT @row:=0) r ) n \nWHERE n.row <= ( select datediff(max(join_on), min(join_on)) from user) + 1\n) dr LEFT JOIN user u ON dr.date = u.join_on\nGROUP BY dr.date\n\n+------------+-----------+\n| date | count(id) |\n+------------+-----------+\n| 2010-04-02 | 1 | \n| 2010-04-03 | 0 | \n| 2010-04-04 | 1 | \n| 2010-04-05 | 0 | \n| 2010-04-06 | 0 | \n| 2010-04-07 | 0 | \n| 2010-04-08 | 2 | \n+------------+-----------+\n7 rows in set (0.00 sec)\n
\n
soup wrap:
There is an approach that can do this in pure SQL but it has limitations.
First you need to have a number sequence 1,2,3...n as rows (assume select row from rows returns that).
Then you can left join on this and convert to dates based on the number of days between min and max.
select @min_join_on := (select min(join_on) from user);
select @no_rows := (select datediff(max(join_on), @min_join_on) from user)+1;
will give you the required number of rows, which then you can use to
select adddate(@min_join_on, interval row - 1 day) from rows where row <= @no_rows;
will return a required sequence of dates on which then you can do a left join back to the users table.
Using variables can be avoided if you use subqueries; I broke it down for readability.
Now, the problem is that the number of rows in the rows table has to be bigger than @no_rows.
For 10,000 rows you can work with date ranges of up to 27 years, with 100,000 rows you can work with date ranges of up to 273 years (this feels really bad, but I am afraid that if you don't want to use stored procedures it will have to look and feel awkward).
So, if you can work with such fixed date ranges, you can even substitute the table with a query such as this
SELECT @row := @row + 1 as row
FROM (select 0 union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) t,
(select 0 union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) t2,
(select 0 union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) t3,
(select 0 union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) t4,
(SELECT @row := 0) r
which will produce 10,000 rows going from 1 to 10,000 and it will not be terribly inefficient at it.
So at the end it is doable in a single query.
create table user(id INT NOT NULL AUTO_INCREMENT, name varchar(100), join_on date, PRIMARY KEY(id));
insert into user values (null, 'user1', '2010-04-02'), (null, 'user2', '2010-04-04'), (null, 'user3', '2010-04-08'), (null, 'user4', '2010-04-08');
mysql> select * from user;
+----+-------+------------+
| id | name  | join_on    |
+----+-------+------------+
|  1 | user1 | 2010-04-02 |
|  2 | user2 | 2010-04-04 |
|  3 | user3 | 2010-04-08 |
|  4 | user4 | 2010-04-08 |
+----+-------+------------+
4 rows in set (0.00 sec)
SELECT date, count(id)
FROM (
SELECT adddate((select min(join_on) from user), row-1) as date
FROM (
SELECT @row := @row + 1 as row FROM (select 0 union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) t, (select 0 union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) t2, (SELECT @row:=0) r ) n
WHERE n.row <= ( select datediff(max(join_on), min(join_on)) from user) + 1
) dr LEFT JOIN user u ON dr.date = u.join_on
GROUP BY dr.date
+------------+-----------+
| date       | count(id) |
+------------+-----------+
| 2010-04-02 |         1 |
| 2010-04-03 |         0 |
| 2010-04-04 |         1 |
| 2010-04-05 |         0 |
| 2010-04-06 |         0 |
| 2010-04-07 |         0 |
| 2010-04-08 |         2 |
+------------+-----------+
7 rows in set (0.00 sec)
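The same gap-filling idea can be checked end to end. This is a minimal sketch using SQLite from Python rather than MySQL (SQLite has no user variables, so the digit cross join is expressed as CTEs); the table contents mirror the sample data above.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE user (id INTEGER PRIMARY KEY, name TEXT, join_on TEXT);
INSERT INTO user (name, join_on) VALUES
  ('user1', '2010-04-02'), ('user2', '2010-04-04'),
  ('user3', '2010-04-08'), ('user4', '2010-04-08');
""")

# digits x digits yields 100 synthetic rows (n = 0..99), the same trick as
# the MySQL cross join of ten-row subqueries; each n becomes one candidate date.
rows = con.execute("""
WITH digits(d) AS (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9)),
     nums(n)   AS (SELECT a.d * 10 + b.d FROM digits a, digits b),
     dr(day)   AS (
       SELECT date((SELECT min(join_on) FROM user), '+' || n || ' days')
       FROM nums
       WHERE n <= (SELECT julianday(max(join_on)) - julianday(min(join_on))
                   FROM user)
     )
SELECT dr.day, count(u.id)
FROM dr LEFT JOIN user u ON u.join_on = dr.day
GROUP BY dr.day
ORDER BY dr.day
""").fetchall()
```

The result is the seven-day series above, zeros included for the days with no signups.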
qid & accept id:
(2695116, 2697554)
query:
Update multiple table column values using single query
soup:
/** XXX CODING HORROR... */\n
\nDepending on your needs, you could use an updateable view. You create a view of your base tables and add an "instead of" trigger to this view and you update the view directly.
\nSome example tables:
\ncreate table party (\n party_id integer,\n employee_id integer\n );\n\ncreate table party_name (\n party_id integer,\n first_name varchar2(120 char),\n last_name varchar2(120 char)\n );\n\ninsert into party values (1,1000); \ninsert into party values (2,2000);\ninsert into party values (3,3000);\n\ninsert into party_name values (1,'Kipper','Family');\ninsert into party_name values (2,'Biff','Family');\ninsert into party_name values (3,'Chip','Family');\n\ncommit;\n\nselect * from party_v;\n\nPARTY_ID EMPLOYEE_ID FIRST_NAME LAST_NAME\n1 1000 Kipper Family\n2 2000 Biff Family\n3 3000 Chip Family\n
\n... then create an updateable view
\ncreate or replace view party_v\nas\nselect\n p.party_id,\n p.employee_id,\n n.first_name,\n n.last_name\nfrom\n party p left join party_name n on p.party_id = n.party_id;\n\ncreate or replace trigger trg_party_update\ninstead of update on party_v \nfor each row\ndeclare\nbegin\n--\n update party\n set\n party_id = :new.party_id,\n employee_id = :new.employee_id\n where\n party_id = :old.party_id;\n--\n update party_name\n set\n party_id = :new.party_id,\n first_name = :new.first_name,\n last_name = :new.last_name\n where\n party_id = :old.party_id;\n--\nend;\n/\n
\nYou can now update the view directly...
\nupdate party_v\nset\n employee_id = 42,\n last_name = 'Oxford'\nwhere\n party_id = 1;\n\nselect * from party_v;\n\nPARTY_ID EMPLOYEE_ID FIRST_NAME LAST_NAME\n1 42 Kipper Oxford\n2 2000 Biff Family\n3 3000 Chip Family\n
\n
soup wrap:
/** XXX CODING HORROR... */
Depending on your needs, you could use an updateable view. You create a view of your base tables and add an "instead of" trigger to this view and you update the view directly.
Some example tables:
create table party (
party_id integer,
employee_id integer
);
create table party_name (
party_id integer,
first_name varchar2(120 char),
last_name varchar2(120 char)
);
insert into party values (1,1000);
insert into party values (2,2000);
insert into party values (3,3000);
insert into party_name values (1,'Kipper','Family');
insert into party_name values (2,'Biff','Family');
insert into party_name values (3,'Chip','Family');
commit;
select * from party_v;
PARTY_ID EMPLOYEE_ID FIRST_NAME LAST_NAME
1 1000 Kipper Family
2 2000 Biff Family
3 3000 Chip Family
... then create an updateable view
create or replace view party_v
as
select
p.party_id,
p.employee_id,
n.first_name,
n.last_name
from
party p left join party_name n on p.party_id = n.party_id;
create or replace trigger trg_party_update
instead of update on party_v
for each row
declare
begin
--
update party
set
party_id = :new.party_id,
employee_id = :new.employee_id
where
party_id = :old.party_id;
--
update party_name
set
party_id = :new.party_id,
first_name = :new.first_name,
last_name = :new.last_name
where
party_id = :old.party_id;
--
end;
/
You can now update the view directly...
update party_v
set
employee_id = 42,
last_name = 'Oxford'
where
party_id = 1;
select * from party_v;
PARTY_ID EMPLOYEE_ID FIRST_NAME LAST_NAME
1 42 Kipper Oxford
2 2000 Biff Family
3 3000 Chip Family
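The answer above is Oracle PL/SQL, but the same INSTEAD OF trigger pattern exists in SQLite, which makes it easy to verify from Python. A minimal sketch (table contents mirror the example; SQLite trigger bodies use NEW/OLD without the colon prefix):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE party (party_id INTEGER, employee_id INTEGER);
CREATE TABLE party_name (party_id INTEGER, first_name TEXT, last_name TEXT);
INSERT INTO party VALUES (1, 1000), (2, 2000);
INSERT INTO party_name VALUES (1, 'Kipper', 'Family'), (2, 'Biff', 'Family');

CREATE VIEW party_v AS
SELECT p.party_id, p.employee_id, n.first_name, n.last_name
FROM party p LEFT JOIN party_name n ON p.party_id = n.party_id;

-- The INSTEAD OF trigger routes one UPDATE against the view to both base tables.
CREATE TRIGGER trg_party_update INSTEAD OF UPDATE ON party_v
BEGIN
  UPDATE party
     SET employee_id = NEW.employee_id
   WHERE party_id = OLD.party_id;
  UPDATE party_name
     SET first_name = NEW.first_name, last_name = NEW.last_name
   WHERE party_id = OLD.party_id;
END;

UPDATE party_v SET employee_id = 42, last_name = 'Oxford' WHERE party_id = 1;
""")

row = con.execute(
    "SELECT employee_id, last_name FROM party_v WHERE party_id = 1"
).fetchone()
```

After the single UPDATE on the view, both base tables reflect the change.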
qid & accept id:
(2746331, 2746350)
query:
How to retrieve the rows (with maximum value in a field) having a another common field?
soup:
This:
\nWITH q AS\n (\n SELECT *, ROW_NUMBER() OVER (PARTITION BY field2 ORDER BY field3 DESC) AS rn\n FROM table1\n )\nSELECT *\nFROM q\nWHERE rn = 1\n
\nor this:
\nSELECT q.*\nFROM (\n SELECT DISTINCT field2\n FROM table1\n ) qo\nCROSS APPLY\n (\n SELECT TOP 1 *\n FROM table1 t\n WHERE t.field2 = qo.field2\n ORDER BY\n t.field3 DESC\n ) q\n
\nDepending on the field2 cardinality, the first or the second query can be more efficient.
\nSee this article for more details:
\n\n
soup wrap:
This:
WITH q AS
(
SELECT *, ROW_NUMBER() OVER (PARTITION BY field2 ORDER BY field3 DESC) AS rn
FROM table1
)
SELECT *
FROM q
WHERE rn = 1
or this:
SELECT q.*
FROM (
SELECT DISTINCT field2
FROM table1
) qo
CROSS APPLY
(
SELECT TOP 1 *
FROM table1 t
WHERE t.field2 = qo.field2
ORDER BY
t.field3 DESC
) q
Depending on the field2 cardinality, the first or the second query can be more efficient.
See this article for more details.
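The ROW_NUMBER() variant is portable enough to try outside SQL Server. A small sketch using SQLite (3.25+, as bundled with recent Python) with invented sample data, keeping only the top row per field2 group:

```python
import sqlite3  # SQLite 3.25+ is required for window functions

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (field1 TEXT, field2 TEXT, field3 INTEGER);
INSERT INTO table1 VALUES
  ('a', 'x', 1), ('b', 'x', 5), ('c', 'y', 2), ('d', 'y', 9), ('e', 'y', 3);
""")

# One row per field2 group: the row with the highest field3.
rows = con.execute("""
WITH q AS (
  SELECT *, ROW_NUMBER() OVER (PARTITION BY field2 ORDER BY field3 DESC) AS rn
  FROM table1
)
SELECT field1, field2, field3 FROM q WHERE rn = 1 ORDER BY field2
""").fetchall()
```

(CROSS APPLY is SQL Server-specific; in databases without it, a LATERAL join or a correlated subquery plays the same role.)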
qid & accept id:
(2769007, 2769023)
query:
formula for computed column based on different table's column
soup:
You could create a user-defined function for this:
\nCREATE FUNCTION dbo.GetValue(@ncode INT, @recid INT)\nRETURNS INT\nAS\nBEGIN\n RETURN (SELECT @recid * nvalue FROM c_const WHERE code = @ncode)\nEND\n
\nand then use that to define your computed column:
\nALTER TABLE dbo.YourTable\n ADD NewColumnName AS dbo.GetValue(ncodeValue, recIdValue)\n
\n
soup wrap:
You could create a user-defined function for this:
CREATE FUNCTION dbo.GetValue(@ncode INT, @recid INT)
RETURNS INT
AS
BEGIN
RETURN (SELECT @recid * nvalue FROM c_const WHERE code = @ncode)
END
and then use that to define your computed column:
ALTER TABLE dbo.YourTable
ADD NewColumnName AS dbo.GetValue(ncodeValue, recIdValue)
qid & accept id:
(2781315, 2781396)
query:
SQL Statement to update the date
soup:
Dates are not strings, but either of the following will result in a date:
\nUPDATE [Table] SET `Birthdate` = CDate('1993-08-02 00:00:00.0') WHERE `ID` = 000\n
\n(see the documentation for CDate)
\nUPDATE [Table] SET `Birthdate` = #08/02/1993# WHERE `ID` = 000\n
\n
soup wrap:
Dates are not strings, but either of the following will result in a date:
UPDATE [Table] SET `Birthdate` = CDate('1993-08-02 00:00:00.0') WHERE `ID` = 000
(see the documentation for CDate)
UPDATE [Table] SET `Birthdate` = #08/02/1993# WHERE `ID` = 000
qid & accept id:
(2781419, 2781452)
query:
Optimal way to convert to date
soup:
try this:
\nCONVERT(DATETIME, CONVERT(NVARCHAR, YYYYMMDD))\n
\nFor example:
\nSELECT CONVERT(DATETIME, CONVERT(NVARCHAR, 20100401))\n
\nResults in:
\n2010-04-01 00:00:00.000\n
\n
soup wrap:
try this:
CONVERT(DATETIME, CONVERT(NVARCHAR, YYYYMMDD))
For example:
SELECT CONVERT(DATETIME, CONVERT(NVARCHAR, 20100401))
Results in:
2010-04-01 00:00:00.000
qid & accept id:
(2788575, 2788639)
query:
tsql script to add delete cascade to existing tables
soup:
ALTER TABLE [wm].[TABLE_NAME] WITH NOCHECK ADD CONSTRAINT [FK_TABLE_NAME_PARENT_TABLE_NAME] FOREIGN KEY([FOREIGN_KEY])\nREFERENCES [wm].[PARENT_TABLE_NAME] ([PRIMARY_KEY])\nON DELETE CASCADE\nGO\n
\n\nTABLE_NAME: name of the table where the children are stored. \nPARENT_TABLE_NAME: name of the table where the parents are stored.\nThese placeholders can refer to the same table. \nFK_TABLE_NAME_PARENT_TABLE_NAME: just a name for the constraint \nFOREIGN_KEY: field in the child table for the connection with the parents, for example - ParentID \nPRIMARY_KEY: field in the parents table, for example - ID \n
\n
\nALTER TABLE [wm].[Thumbs] WITH NOCHECK ADD CONSTRAINT [FK_Thumbs_Documents] FOREIGN KEY([DocID])\nREFERENCES [wm].[Documents] ([ID])\nON DELETE CASCADE\nGO\n
\n
soup wrap:
ALTER TABLE [wm].[TABLE_NAME] WITH NOCHECK ADD CONSTRAINT [FK_TABLE_NAME_PARENT_TABLE_NAME] FOREIGN KEY([FOREIGN_KEY])
REFERENCES [wm].[PARENT_TABLE_NAME] ([PRIMARY_KEY])
ON DELETE CASCADE
GO
TABLE_NAME: name of the table where the children are stored.
PARENT_TABLE_NAME: name of the table where the parents are stored.
These placeholders can refer to the same table.
FK_TABLE_NAME_PARENT_TABLE_NAME: just a name for the constraint
FOREIGN_KEY: field in the child table for the connection with the parents, for example - ParentID
PRIMARY_KEY: field in the parents table, for example - ID
ALTER TABLE [wm].[Thumbs] WITH NOCHECK ADD CONSTRAINT [FK_Thumbs_Documents] FOREIGN KEY([DocID])
REFERENCES [wm].[Documents] ([ID])
ON DELETE CASCADE
GO
qid & accept id:
(2792388, 2792436)
query:
SQL Reset Identity ID in already populated table
soup:
The easiest way would be to make a copy of the current table, fix up any parentid issues, drop it and then rename the new one.
\nYou could also temporarily remove the IDENTITY and try the following:
\n;WITH TBL AS\n(\n SELECT *, ROW_NUMBER() OVER (ORDER BY ID) AS RN\n FROM CURRENT_TABLE\n)\nUPDATE TBL\nSET ID = RN\n
\nOr, if you don't care about the order of the records, this
\nDECLARE @id INT;\nSET @id = 0;\n\nUPDATE CURRENT_TABLE\nSET @id = ID = @id + 1;\n
\n
soup wrap:
The easiest way would be to make a copy of the current table, fix up any parentid issues, drop it and then rename the new one.
You could also temporarily remove the IDENTITY and try the following:
;WITH TBL AS
(
SELECT *, ROW_NUMBER() OVER (ORDER BY ID) AS RN
FROM CURRENT_TABLE
)
UPDATE TBL
SET ID = RN
Or, if you don't care about the order of the records, this
DECLARE @id INT;
SET @id = 0;
UPDATE CURRENT_TABLE
SET @id = ID = @id + 1;
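The first suggestion (copy, renumber, drop, rename) is easy to verify. A minimal sketch in SQLite from Python with invented sample data; the same shape works in SQL Server with an IDENTITY column on the copy:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE current_table (id INTEGER PRIMARY KEY, payload TEXT);
INSERT INTO current_table VALUES (3, 'a'), (7, 'b'), (20, 'c');

-- Copy into a fresh table so a new id sequence is handed out in the old
-- order, then swap the tables. Any foreign keys pointing at id would need
-- the same renumbering - the 'fix up parentid issues' caveat above.
CREATE TABLE fixed (id INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT);
INSERT INTO fixed (payload) SELECT payload FROM current_table ORDER BY id;
DROP TABLE current_table;
ALTER TABLE fixed RENAME TO current_table;
""")

rows = con.execute("SELECT id, payload FROM current_table ORDER BY id").fetchall()
```

The gaps (3, 7, 20) collapse to a dense 1, 2, 3 while preserving the original order.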
qid & accept id:
(2884295, 2884315)
query:
Help with constructing a conditional SQL statement
soup:
Naively:
\nSELECT *\nFROM Entries\nWHERE Language = 'Swedish' \n\nUNION ALL\n\nSELECT *\nFROM Entries\nWHERE Language = 'English' \n AND NOT EXISTS (\n SELECT *\n FROM Entries\n WHERE Language = 'Swedish' \n )\n
\nor:
\nSELECT *\nFROM Entries\nWHERE Language = 'Swedish' \n OR (Language = 'English' \n AND NOT EXISTS (\n SELECT *\n FROM Entries\n WHERE Language = 'Swedish' \n )\n )\n
\n
soup wrap:
Naively:
SELECT *
FROM Entries
WHERE Language = 'Swedish'
UNION ALL
SELECT *
FROM Entries
WHERE Language = 'English'
AND NOT EXISTS (
SELECT *
FROM Entries
WHERE Language = 'Swedish'
)
or:
SELECT *
FROM Entries
WHERE Language = 'Swedish'
OR (Language = 'English'
AND NOT EXISTS (
SELECT *
FROM Entries
WHERE Language = 'Swedish'
)
)
qid & accept id:
(2900217, 2900250)
query:
Getting age in years in a SQL query
soup:
Assuming birthday is stored as a DateTime
\nSelect Count(*)\nFrom (\n Select Id, Floor(DateDiff(d, BirthDate, GetDate()) / 365.25) As Age\n From People\n ) As EmpAges\nWhere Age Between 20 And 40\n
\nThis could also be written without the derived table like so:
\nSelect Count(*)\nFrom People\nWhere Floor(DateDiff(d, BirthDate, GetDate()) / 365.25) Between 20 And 40\n
\nYet another way would be to use DateAdd. As OMG Ponies and ck mentioned, this one would be the most efficient of the bunch as it would enable the use of an index on dateOfBirth if it existed.
\nSelect Count(*)\nFrom People\nWhere DateOfBirth Between DateAdd(yy, -40, GetDate()) And DateAdd(yy, -20, GetDate())\n
\n
soup wrap:
Assuming birthday is stored as a DateTime
Select Count(*)
From (
Select Id, Floor(DateDiff(d, BirthDate, GetDate()) / 365.25) As Age
From People
) As EmpAges
Where Age Between 20 And 40
This could also be written without the derived table like so:
Select Count(*)
From People
Where Floor(DateDiff(d, BirthDate, GetDate()) / 365.25) Between 20 And 40
Yet another way would be to use DateAdd. As OMG Ponies and ck mentioned, this one would be the most efficient of the bunch as it would enable the use of an index on dateOfBirth if it existed.
Select Count(*)
From People
Where DateOfBirth Between DateAdd(yy, -40, GetDate()) And DateAdd(yy, -20, GetDate())
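The days-divided-by-365.25 arithmetic in the first two queries can be sketched outside SQL to see its behavior around birthdays (function name invented for illustration):

```python
from datetime import date

def age_in_years(birth: date, today: date) -> int:
    # Same arithmetic as Floor(DateDiff(d, BirthDate, GetDate()) / 365.25):
    # whole days elapsed divided by the average year length, floored.
    return int((today - birth).days / 365.25)

exact_20 = age_in_years(date(1990, 6, 15), date(2010, 6, 15))   # 20th birthday
day_short = age_in_years(date(1990, 6, 16), date(2010, 6, 15))  # one day before
```

The approximation can drift by a day right around a birthday in some years, which is another point in favor of the index-friendly DateAdd range query above.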
qid & accept id:
(2913338, 2913370)
query:
In mySQL, Is it possible to SELECT from two tables and merge the columns?
soup:
You can combine columns from both tables using (id,name) as the joining criteria with:
\nselect\n a.id as id,\n a.name as name,\n concat(a.somefield1, ' ', b.somefield1) as somefield1\nfrom tablea a, tableb b\nwhere a.id = b.id\n and a.name = b.name\n and b.name = 'mooseburgers';\n
\nIf you want to join on just the (id) and combine the name and somefield1 columns:
\nselect\n a.id as id,\n concat(a.name, ' ', b.name) as name,\n concat(a.somefield1, ' ', b.somefield1) as somefield1\nfrom tablea a, tableb b\nwhere a.id = b.id\n and b.name = 'mooseburgers';\n
\nAlthough I have to admit this is a rather unusual way of doing things. I assume you have your reasons however :-)
\nIf I've misunderstood your question and you just want a more conventional union of the two tables, use something like:
\nselect id, name, somefield1, '' as somefield2 from tablea where name = 'mooseburgers'\nunion all\nselect id, name, somefield1, somefield2 from tableb where name = 'mooseburgers'\n
\nThis won't combine rows but will instead just append the rows from the two queries. Use union on its own if you want to remove duplicate rows but, if you're certain there are no duplicates or you don't want them removed, union all is often more efficient.
\n
\nBased on your edit, the actual query would be:
\nselect name, somefield1 from tablea where name = 'zoot'\nunion all\nselect name, somefield1 from tableb where name = 'zoot'\n
\n(or union if you don't want duplicates where a.name==b.name=='zoot' and a.somefield1==b.somefield1).
\n
soup wrap:
You can combine columns from both tables using (id,name) as the joining criteria with:
select
a.id as id,
a.name as name,
concat(a.somefield1, ' ', b.somefield1) as somefield1
from tablea a, tableb b
where a.id = b.id
and a.name = b.name
and b.name = 'mooseburgers';
If you want to join on just the (id) and combine the name and somefield1 columns:
select
a.id as id,
concat(a.name, ' ', b.name) as name,
concat(a.somefield1, ' ', b.somefield1) as somefield1
from tablea a, tableb b
where a.id = b.id
and b.name = 'mooseburgers';
Although I have to admit this is a rather unusual way of doing things. I assume you have your reasons however :-)
If I've misunderstood your question and you just want a more conventional union of the two tables, use something like:
select id, name, somefield1, '' as somefield2 from tablea where name = 'mooseburgers'
union all
select id, name, somefield1, somefield2 from tableb where name = 'mooseburgers'
This won't combine rows but will instead just append the rows from the two queries. Use union on its own if you want to remove duplicate rows but, if you're certain there are no duplicates or you don't want them removed, union all is often more efficient.
Based on your edit, the actual query would be:
select name, somefield1 from tablea where name = 'zoot'
union all
select name, somefield1 from tableb where name = 'zoot'
(or union if you don't want duplicates where a.name==b.name=='zoot' and a.somefield1==b.somefield1).
qid & accept id:
(2919168, 2920858)
query:
Invoking a function call in a string in an Oracle Procedure
soup:
It's easy enough to dynamically execute a string ...
\ncreate or replace function fmt_fname (p_dyn_string in varchar2)\n return varchar2\nis\n return_value varchar2(128);\nbegin\n execute immediate 'select '||p_dyn_string||' from dual'\n into return_value;\n return return_value;\nend fmt_fname;\n/\n
\nThe problem arises where your string contains literals, with the dreaded quotes ...
\nSQL> select fmt_fname('TEST||to_char(sysdate, 'DDD')') from dual\n 2 /\nselect fmt_fname('TEST||to_char(sysdate, 'DDD')') from dual\n *\nERROR at line 1:\nORA-00907: missing right parenthesis\n\n\nSQL>\n
\nSo we have to escape the apostrophes, all of them, including the ones you haven't included in your posted string:
\nSQL> select * from t34\n 2 /\n\n ID FILENAME\n---------- ------------------------------\n 1 APC001\n 2 XYZ213\n 3 TEST147\n\n\nSQL> select * from t34\n 2 where filename = fmt_fname('''TEST''||to_char(sysdate, ''DDD'')')\n 3 /\n\n ID FILENAME\n---------- ------------------------------\n 3 TEST147\n\nSQL>\n
\nEDIT
\nJust for the sake of fairness I feel I should point out that Tony's solution works just as well:
\nSQL> create or replace function fmt_fname (p_dyn_string in varchar2)\n 2 return varchar2\n 3 is\n 4 return_value varchar2(128);\n 5 begin\n 6 execute immediate 'begin :result := ' || p_dyn_string || '; end;'\n 7 using out return_value;\n 8 return return_value;\n 9 end;\n 10 /\n\nFunction created.\n\nSQL> select fmt_fname('''TEST''||to_char(sysdate, ''DDD'')') from dual\n 2 /\n\nFMT_FNAME('''TEST''||TO_CHAR(SYSDATE,''DDD'')')\n--------------------------------------------------------------------------------\nTEST147\n\nSQL>\n
\nIn fact, by avoiding the SELECT on DUAL it is probably better.
\n
soup wrap:
It's easy enough to dynamically execute a string ...
create or replace function fmt_fname (p_dyn_string in varchar2)
return varchar2
is
return_value varchar2(128);
begin
execute immediate 'select '||p_dyn_string||' from dual'
into return_value;
return return_value;
end fmt_fname;
/
The problem arises where your string contains literals, with the dreaded quotes ...
SQL> select fmt_fname('TEST||to_char(sysdate, 'DDD')') from dual
2 /
select fmt_fname('TEST||to_char(sysdate, 'DDD')') from dual
*
ERROR at line 1:
ORA-00907: missing right parenthesis
SQL>
So we have to escape the apostrophes, all of them, including the ones you haven't included in your posted string:
SQL> select * from t34
2 /
ID FILENAME
---------- ------------------------------
1 APC001
2 XYZ213
3 TEST147
SQL> select * from t34
2 where filename = fmt_fname('''TEST''||to_char(sysdate, ''DDD'')')
3 /
ID FILENAME
---------- ------------------------------
3 TEST147
SQL>
EDIT
Just for the sake of fairness I feel I should point out that Tony's solution works just as well:
SQL> create or replace function fmt_fname (p_dyn_string in varchar2)
2 return varchar2
3 is
4 return_value varchar2(128);
5 begin
6 execute immediate 'begin :result := ' || p_dyn_string || '; end;'
7 using out return_value;
8 return return_value;
9 end;
10 /
Function created.
SQL> select fmt_fname('''TEST''||to_char(sysdate, ''DDD'')') from dual
2 /
FMT_FNAME('''TEST''||TO_CHAR(SYSDATE,''DDD'')')
--------------------------------------------------------------------------------
TEST147
SQL>
In fact, by avoiding the SELECT on DUAL it is probably better.
qid & accept id:
(2922856, 2922972)
query:
mysql: how to change column to be PK Auto_Increment
soup:
Here we create a little table:
\nmysql> CREATE TABLE test2 (id int);\n
\nNote Null is YES, and id is not a primary key, nor does it auto_increment.
\nmysql> DESCRIBE test2;\n+-------+---------+------+-----+---------+-------+\n| Field | Type | Null | Key | Default | Extra |\n+-------+---------+------+-----+---------+-------+\n| id | int(11) | YES | | NULL | | \n+-------+---------+------+-----+---------+-------+\n1 row in set (0.00 sec)\n
\nHere is the alter command:
\nmysql> ALTER TABLE test2 MODIFY COLUMN id INT NOT NULL auto_increment, ADD primary key (id);\n
\nNow Null is NO, and id is a primary key with auto_increment.
\nmysql> describe test2;\ndescribe test2;\n+-------+---------+------+-----+---------+----------------+\n| Field | Type | Null | Key | Default | Extra |\n+-------+---------+------+-----+---------+----------------+\n| id | int(11) | NO | PRI | NULL | auto_increment | \n+-------+---------+------+-----+---------+----------------+\n1 row in set (0.00 sec)\n
\nPrimary keys are always unique.
\n
soup wrap:
Here we create a little table:
mysql> CREATE TABLE test2 (id int);
Note Null is YES, and id is not a primary key, nor does it auto_increment.
mysql> DESCRIBE test2;
+-------+---------+------+-----+---------+-------+
| Field | Type    | Null | Key | Default | Extra |
+-------+---------+------+-----+---------+-------+
| id    | int(11) | YES  |     | NULL    |       |
+-------+---------+------+-----+---------+-------+
1 row in set (0.00 sec)
Here is the alter command:
mysql> ALTER TABLE test2 MODIFY COLUMN id INT NOT NULL auto_increment, ADD primary key (id);
Now Null is NO, and id is a primary key with auto_increment.
mysql> describe test2;
+-------+---------+------+-----+---------+----------------+
| Field | Type    | Null | Key | Default | Extra          |
+-------+---------+------+-----+---------+----------------+
| id    | int(11) | NO   | PRI | NULL    | auto_increment |
+-------+---------+------+-----+---------+----------------+
1 row in set (0.00 sec)
Primary keys are always unique.
qid & accept id:
(2930768, 2930818)
query:
How to compare sqlite TIMESTAMP values
soup:
The issue is with the way you've inserted data into your table: the +0200 syntax doesn't match any of SQLite's time formats:
\n\n- YYYY-MM-DD
\n- YYYY-MM-DD HH:MM
\n- YYYY-MM-DD HH:MM:SS
\n- YYYY-MM-DD HH:MM:SS.SSS
\n- YYYY-MM-DDTHH:MM
\n- YYYY-MM-DDTHH:MM:SS
\n- YYYY-MM-DDTHH:MM:SS.SSS
\n- HH:MM
\n- HH:MM:SS
\n- HH:MM:SS.SSS
\n- now
\n- DDDDDDDDDD
\n
\nChanging it to use the SS.SSS format works correctly:
\nsqlite> CREATE TABLE Foo (created_at TIMESTAMP);\nsqlite> INSERT INTO Foo VALUES('2010-05-28T15:36:56+0200');\nsqlite> SELECT * FROM Foo WHERE foo.created_at < '2010-05-28 16:20:55';\nsqlite> SELECT * FROM Foo WHERE DATETIME(foo.created_at) < '2010-05-28 16:20:55';\nsqlite> INSERT INTO Foo VALUES('2010-05-28T15:36:56.200');\nsqlite> SELECT * FROM Foo WHERE DATETIME(foo.created_at) < '2010-05-28 16:20:55';\n2010-05-28T15:36:56.200\n
\nIf you absolutely can't change the format when it is inserted, you might have to fall back to doing something "clever" and modifying the actual string (i.e. to replace the + with a ., etc.).
\n
\n(original answer)
\nYou haven't described what kind of data is contained in your CREATED_AT column. If it is indeed a datetime, it will compare correctly against a string:
\nsqlite> SELECT DATETIME('now');\n2010-05-28 16:33:10\nsqlite> SELECT DATETIME('now') < '2011-01-01 00:00:00';\n1\n
\nIf it is stored as a unix timestamp, you need to call DATETIME function with the second argument as 'unixepoch' to compare against a string:
\nsqlite> SELECT DATETIME(0, 'unixepoch');\n1970-01-01 00:00:00\nsqlite> SELECT DATETIME(0, 'unixepoch') < '2010-01-01 00:00:00';\n1\nsqlite> SELECT DATETIME(0, 'unixepoch') == DATETIME('1970-01-01 00:00:00');\n1\n
\nIf neither of those solve your problem (and even if they do!) you should always post some data so that other people can reproduce your problem. You should even feel free to come up with a subset of your original data that still reproduces the problem.
\n
soup wrap:
The issue is with the way you've inserted data into your table: the +0200 syntax doesn't match any of SQLite's time formats:
- YYYY-MM-DD
- YYYY-MM-DD HH:MM
- YYYY-MM-DD HH:MM:SS
- YYYY-MM-DD HH:MM:SS.SSS
- YYYY-MM-DDTHH:MM
- YYYY-MM-DDTHH:MM:SS
- YYYY-MM-DDTHH:MM:SS.SSS
- HH:MM
- HH:MM:SS
- HH:MM:SS.SSS
- now
- DDDDDDDDDD
Changing it to use the SS.SSS format works correctly:
sqlite> CREATE TABLE Foo (created_at TIMESTAMP);
sqlite> INSERT INTO Foo VALUES('2010-05-28T15:36:56+0200');
sqlite> SELECT * FROM Foo WHERE foo.created_at < '2010-05-28 16:20:55';
sqlite> SELECT * FROM Foo WHERE DATETIME(foo.created_at) < '2010-05-28 16:20:55';
sqlite> INSERT INTO Foo VALUES('2010-05-28T15:36:56.200');
sqlite> SELECT * FROM Foo WHERE DATETIME(foo.created_at) < '2010-05-28 16:20:55';
2010-05-28T15:36:56.200
If you absolutely can't change the format when it is inserted, you might have to fall back to doing something "clever" and modifying the actual string (i.e. to replace the + with a ., etc.).
(original answer)
You haven't described what kind of data is contained in your CREATED_AT column. If it is indeed a datetime, it will compare correctly against a string:
sqlite> SELECT DATETIME('now');
2010-05-28 16:33:10
sqlite> SELECT DATETIME('now') < '2011-01-01 00:00:00';
1
If it is stored as a unix timestamp, you need to call DATETIME function with the second argument as 'unixepoch' to compare against a string:
sqlite> SELECT DATETIME(0, 'unixepoch');
1970-01-01 00:00:00
sqlite> SELECT DATETIME(0, 'unixepoch') < '2010-01-01 00:00:00';
1
sqlite> SELECT DATETIME(0, 'unixepoch') == DATETIME('1970-01-01 00:00:00');
1
If neither of those solve your problem (and even if they do!) you should always post some data so that other people can reproduce your problem. You should even feel free to come up with a subset of your original data that still reproduces the problem.
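The behavior of the accepted vs. rejected formats above can be reproduced from Python's built-in sqlite3 module; the table and values mirror the transcript:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE foo (created_at TEXT)")
con.execute("INSERT INTO foo VALUES ('2010-05-28T15:36:56+0200')")  # '+0200' is not a valid suffix
con.execute("INSERT INTO foo VALUES ('2010-05-28T15:36:56.200')")   # '.SSS' is accepted

# DATETIME() yields NULL for the unrecognised '+0200' row, so that row
# silently drops out of the comparison; only the well-formed row matches.
bad = con.execute("SELECT DATETIME('2010-05-28T15:36:56+0200')").fetchone()[0]
rows = con.execute("""
SELECT created_at FROM foo
WHERE DATETIME(created_at) < '2010-05-28 16:20:55'
""").fetchall()
```

(Note that SQLite does accept a timezone suffix of the form +HH:MM or Z; it is specifically the colon-less +0200 that fails to parse.)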
qid & accept id:
(2945765, 2946013)
query:
Determining SQL MERGE statement result
soup:
What you could do is create a temporary table (or a table variable) and send your output there - add some meaningful fields to your OUTPUT clause to make it clear what row was \naffected by what action:
\nDECLARE @OutputTable TABLE (Guid UNIQUEIDENTIFIER, Action VARCHAR(100))\n\nMERGE INTO TestTable as target\nUSING ( select '00D81CB4EA0842EF9E158BB8FEC48A1E' )\nAS source (Guid)\nON ( target.Guid = source.Guid ) \nWHEN MATCHED THEN\nUPDATE SET Test_Column = NULL\nWHEN NOT MATCHED THEN\nINSERT (Guid, Test_Column) VALUES ('00D81CB4EA0842EF9E158BB8FEC48A1E', NULL)\nOUTPUT INSERTED.Guid, $action INTO @OutputTable\n\nSELECT\n Guid, Action\nFROM\n @OutputTable\n
\nUPDATE: ah, okay, so you want to call this from .NET ! Well, in that case, just call it using the .ExecuteReader() method on your SqlCommand object - the stuff you're outputting using OUTPUT... will be returned to the .NET caller as a result set - you can loop through that:
\nusing(SqlCommand cmd = new SqlCommand(mergeStmt, connection))\n{\n connection.Open();\n\n using(SqlDataReader rdr = cmd.ExecuteReader())\n {\n while(rdr.Read())\n {\n var outputAction = rdr.GetValue(0);\n }\n\n rdr.Close();\n }\n connection.Close();\n}\n
\nYou should get back the resulting "$action" from that data reader.
\n
soup wrap:
What you could do is create a temporary table (or a table variable) and send your output there - add some meaningful fields to your OUTPUT clause to make it clear what row was
affected by what action:
DECLARE @OutputTable TABLE (Guid UNIQUEIDENTIFIER, Action VARCHAR(100))
MERGE INTO TestTable as target
USING ( select '00D81CB4EA0842EF9E158BB8FEC48A1E' )
AS source (Guid)
ON ( target.Guid = source.Guid )
WHEN MATCHED THEN
UPDATE SET Test_Column = NULL
WHEN NOT MATCHED THEN
INSERT (Guid, Test_Column) VALUES ('00D81CB4EA0842EF9E158BB8FEC48A1E', NULL)
OUTPUT INSERTED.Guid, $action INTO @OutputTable
SELECT
Guid, Action
FROM
@OutputTable
UPDATE: ah, okay, so you want to call this from .NET! Well, in that case, just call it using the .ExecuteReader() method on your SqlCommand object - the rows you emit with the OUTPUT clause come back to the .NET caller as a result set, which you can loop through:
using(SqlCommand cmd = new SqlCommand(mergeStmt, connection))
{
connection.Open();
using(SqlDataReader rdr = cmd.ExecuteReader())
{
while(rdr.Read())
{
var outputAction = rdr.GetValue(0);
}
rdr.Close();
}
connection.Close();
}
You should get back the resulting "$action" from that data reader.
qid & accept id:
(2978700, 2978764)
query:
Calculate running total in SQLite table using triggers
soup:
\nPlease check the value of SQLITE_MAX_TRIGGER_DEPTH. Could it be set to 1 instead of default 1000?
\nPlease check your SQLite version. Before 3.6.18, recursive triggers were not supported.
\n
\nPlease note that the following worked for me 100% OK
\ndrop table "AccountBalances"
\nCREATE TEMP TABLE "AccountBalances" (\n "Id" INTEGER PRIMARY KEY, \n "Balance" REAL);\n\nINSERT INTO "AccountBalances" values (1,0)\nINSERT INTO "AccountBalances" values (2,0);\nINSERT INTO "AccountBalances" values (3,0);\nINSERT INTO "AccountBalances" values (4,0);\nINSERT INTO "AccountBalances" values (5,0);\nINSERT INTO "AccountBalances" values (6,0);\n\nCREATE TRIGGER UpdateAccountBalance AFTER UPDATE ON AccountBalances\nBEGIN\n UPDATE AccountBalances \n SET Balance = 1 + new.Balance \n WHERE Id = new.Id + 1;\nEND;\n\nPRAGMA recursive_triggers = 'on';\n\nUPDATE AccountBalances \n SET Balance = 1 \n WHERE Id = 1\n\nselect * from "AccountBalances";\n
\nResulted in:
\nId Balance\n1 1\n2 2\n3 3\n4 4\n5 5\n6 6\n
\n
soup wrap:
Please check the value of SQLITE_MAX_TRIGGER_DEPTH. Could it be set to 1 instead of default 1000?
Please check your SQLite version. Before 3.6.18, recursive triggers were not supported.
Please note that the following worked for me 100% OK
drop table "AccountBalances";
CREATE TEMP TABLE "AccountBalances" (
"Id" INTEGER PRIMARY KEY,
"Balance" REAL);
INSERT INTO "AccountBalances" values (1,0);
INSERT INTO "AccountBalances" values (2,0);
INSERT INTO "AccountBalances" values (3,0);
INSERT INTO "AccountBalances" values (4,0);
INSERT INTO "AccountBalances" values (5,0);
INSERT INTO "AccountBalances" values (6,0);
CREATE TRIGGER UpdateAccountBalance AFTER UPDATE ON AccountBalances
BEGIN
UPDATE AccountBalances
SET Balance = 1 + new.Balance
WHERE Id = new.Id + 1;
END;
PRAGMA recursive_triggers = 'on';
UPDATE AccountBalances
SET Balance = 1
WHERE Id = 1;
select * from "AccountBalances";
Resulted in:
Id Balance
1 1
2 2
3 3
4 4
5 5
6 6
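The same experiment runs unmodified under Python's bundled SQLite, which makes the role of the recursive_triggers pragma easy to confirm (without it, only Id 2 would be bumped):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA recursive_triggers = ON")  # off by default
con.executescript("""
CREATE TABLE AccountBalances (Id INTEGER PRIMARY KEY, Balance REAL);
INSERT INTO AccountBalances VALUES (1,0),(2,0),(3,0),(4,0),(5,0),(6,0);

-- Each update bumps the next row, so the trigger chains down the table.
CREATE TRIGGER UpdateAccountBalance AFTER UPDATE ON AccountBalances
BEGIN
  UPDATE AccountBalances
     SET Balance = 1 + NEW.Balance
   WHERE Id = NEW.Id + 1;
END;

UPDATE AccountBalances SET Balance = 1 WHERE Id = 1;
""")

rows = con.execute("SELECT Id, Balance FROM AccountBalances ORDER BY Id").fetchall()
```

The chain stops on its own at Id 7 (no row), well under the trigger depth limit.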
qid & accept id:
(3005323, 3005737)
query:
How can I manage a FIFO-queue in an database with SQL?
soup:
Reading the comments you say that you are willing to add an auto-increment or date field to record the proper position of each row. Once you add this I would recommend adding one more column to the In table, called Processed, which is automatically set to false when a row is added to the table. Any rows that have been copied to OUT already have their Processed field set to true.
\n+----+\n| In |\n+-----------+-----------+-------+-----------+\n| AUtoId | Supply_ID | Price | Processed |\n+-----------+-----------+-------+-----------+\n| 1 | 1 | 75 | 1 |\n| 2 | 1 | 75 | 1 |\n| 3 | 1 | 75 | 0 |\n| 4 | 2 | 80 | 0 |\n| 5 | 2 | 80 | 0 |\n+-----------+-----------+-------+---------- +\n
\nThen to find the next item to move to OUT you can do
\nSELECT TOP 1 Supply_ID, Price \nFROM In WHERE Processed = 0\nORDER BY [Your Auto Increment Field or Date]\n
\nOnce the row is moved to OUT then you just UPDATE the processed field of that row to true.
\n
soup wrap:
Reading the comments you say that you are willing to add an auto-increment or date field to record the proper position of each row. Once you add this I would recommend adding one more column to the In table, called Processed, which is automatically set to false when a row is added to the table. Any rows that have been copied to OUT already have their Processed field set to true.
+----+
| In |
+-----------+-----------+-------+-----------+
| AutoId    | Supply_ID | Price | Processed |
+-----------+-----------+-------+-----------+
| 1         | 1         | 75    | 1         |
| 2         | 1         | 75    | 1         |
| 3         | 1         | 75    | 0         |
| 4         | 2         | 80    | 0         |
| 5         | 2         | 80    | 0         |
+-----------+-----------+-------+-----------+
Then to find the next item to move to OUT you can do
SELECT TOP 1 Supply_ID, Price
FROM In WHERE Processed = 0
ORDER BY [Your Auto Increment Field or Date]
Once the row is moved to OUT then you just UPDATE the processed field of that row to true.
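The select-then-mark cycle can be sketched end to end. This uses SQLite from Python (so TOP 1 becomes LIMIT 1); table names, the pop_next helper, and the sample data are invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE in_queue (
  auto_id   INTEGER PRIMARY KEY AUTOINCREMENT,
  supply_id INTEGER,
  price     INTEGER,
  processed INTEGER NOT NULL DEFAULT 0
);
INSERT INTO in_queue (supply_id, price) VALUES (1,75),(1,75),(1,75),(2,80),(2,80);
UPDATE in_queue SET processed = 1 WHERE auto_id IN (1, 2);  -- already moved to OUT
""")

def pop_next(con):
    # SELECT TOP 1 ... ORDER BY <auto id>  becomes  ORDER BY ... LIMIT 1
    row = con.execute("""
        SELECT auto_id, supply_id, price FROM in_queue
        WHERE processed = 0 ORDER BY auto_id LIMIT 1
    """).fetchone()
    if row is not None:
        con.execute("UPDATE in_queue SET processed = 1 WHERE auto_id = ?", (row[0],))
    return row

first = pop_next(con)
second = pop_next(con)
```

Under concurrent consumers the select and the flag update would need to run in one transaction (or a single UPDATE ... RETURNING) to avoid two workers claiming the same row.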
qid & accept id:
(3035105, 3035145)
query:
Self join to a table
soup:
select e1.* from Employee e1, Employee e2 where \n e2.name = 'a' and\n e1.salary > e2.salary\n
\nUsing self join
\n select e1.* from Employee e1 join Employee e2 on \n e2.name = 'a' and\n e1.salary > e2.salary\n
\n
soup wrap:
select e1.* from Employee e1, Employee e2 where
e2.name = 'a' and
e1.salary > e2.salary
Using self join
select e1.* from Employee e1 join Employee e2 on
e2.name = 'a' and
e1.salary > e2.salary
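The self join can be sanity-checked with a small sqlite3 sketch (sample salaries are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employee (name TEXT, salary INTEGER)")
conn.executemany("INSERT INTO Employee VALUES (?, ?)",
                 [("a", 100), ("b", 150), ("c", 90)])

# Employees earning more than employee 'a', via the self join above:
# e2 is pinned to the reference row, e1 ranges over everyone else.
rows = conn.execute("""SELECT e1.name, e1.salary
                         FROM Employee e1
                         JOIN Employee e2
                           ON e2.name = 'a'
                          AND e1.salary > e2.salary""").fetchall()
```

Only "b" out-earns "a" here, so that is the single row returned.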
qid & accept id:
(3053125, 3053404)
query:
Shrinking database
soup:
Firstly, if you can avoid shrinking a production database then do so. Buying additional disk storage is almost always the more practical solution in the long run.
\nThere is a reason that your database data/log files have grown to their current size and unless you have purged data from your database then it is very likely (if not a certainty) that your database will grow to the current size once again, post shrink exercise.
\nWith this in mind you should look to identify the cause of your database growth.
\nFinally, if you absolutely must shrink your database, choose the time to do so wisely, i.e. perform this maintenance at a time when your live system typically experiences lower workload. Shrinking data files causes a significant amount of disk I/O, especially if the data pages are to be reorganized.
\nThen identify which data files or log files contain the most free space and target these to be shrunk individually. There is no point in performing a database wide shrink exercise if for example it is only the log file that has a significant amount of free space.
\nIn order to do this, consult the documentation for the DBCC SHRINKFILE command.
\nUseful Information:
\nIdentify the amount of free space in the database overall:
\nEXEC sp_spaceused\n
\nIdentify the amount of free log space:
\nDBCC SQLPERF('logspace')\n
\nIdentify the amount of free space per data/log file:
\nSELECT \n name AS 'File Name' , \n physical_name AS 'Physical Name', \n size/128 AS 'Total Size in MB',\n size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS 'Available Space In MB',\n *\nFROM sys.database_files;\n
\n
soup wrap:
Firstly, if you can avoid shrinking a production database then do so. Buying additional disk storage is almost always the more practical solution in the long run.
There is a reason that your database data/log files have grown to their current size and unless you have purged data from your database then it is very likely (if not a certainty) that your database will grow to the current size once again, post shrink exercise.
With this in mind you should look to identify the cause of your database growth.
Finally, if you absolutely must shrink your database, choose the time to do so wisely, i.e. perform this maintenance at a time when your live system typically experiences lower workload. Shrinking data files causes a significant amount of disk I/O, especially if the data pages are to be reorganized.
Then identify which data files or log files contain the most free space and target these to be shrunk individually. There is no point in performing a database wide shrink exercise if for example it is only the log file that has a significant amount of free space.
In order to do this, consult the documentation for the DBCC SHRINKFILE command.
Useful Information:
Identify the amount of free space in the database overall:
EXEC sp_spaceused
Identify the amount of free log space:
DBCC SQLPERF('logspace')
Identify the amount of free space per data/log file:
SELECT
name AS 'File Name' ,
physical_name AS 'Physical Name',
size/128 AS 'Total Size in MB',
size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS 'Available Space In MB',
*
FROM sys.database_files;
qid & accept id:
(3084672, 3084703)
query:
TSQL Howto get count of unique users?
soup:
You can:
\nSELECT COUNT(DISTINCT userID) \nFROM Tbl\n
\nYou can give the count column a name by aliasing it:
\nSELECT COUNT(DISTINCT userID) NumberOfDistinctUsers\nFROM Tbl\n
\n
soup wrap:
You can:
SELECT COUNT(DISTINCT userID)
FROM Tbl
You can give the count column a name by aliasing it:
SELECT COUNT(DISTINCT userID) NumberOfDistinctUsers
FROM Tbl
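The aliased COUNT(DISTINCT ...) form works unchanged in SQLite, so a quick sketch (sample userIDs made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Tbl (userID INTEGER)")
conn.executemany("INSERT INTO Tbl VALUES (?)",
                 [(1,), (1,), (2,), (3,), (3,)])

# Three distinct users among five rows.
(unique_users,) = conn.execute(
    "SELECT COUNT(DISTINCT userID) AS NumberOfDistinctUsers FROM Tbl").fetchone()
```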
qid & accept id:
(3167775, 3167957)
query:
SQL - Grab Detail Rows as Columns in Join
soup:
select \n C.ACCOUNTNO,\n C.CONTACT,\n C.KEY1,\n C.KEY4, \n HichschoolCS.State as HighSchool, \n TestSatCS.state as Test\n\n\nfrom \n contact1 C\n left join CONTSUPP HichschoolCS on C.accountno=HichschoolCS.accountno \n and HichschoolCS.contact = 'High School'\n left join CONTSUPP TestSatCS on C.accountno=TestSatCS.accountno \n and TestSatCS.contact = 'Test/SAT'\nwhere \n C.KEY1!='00PRSP' \n AND (C.U_KEY2='2009 FALL' \n OR C.U_KEY2='2010 SPRING' \n OR C.U_KEY2='2010 J TERM' \n OR C.U_KEY2='2010 SUMMER')\n
\nUpdate: Added example of only using the highest SAT score
\nselect \n C.ACCOUNTNO,\n C.CONTACT,\n C.KEY1,\n C.KEY4, \n HichschoolCS.State as HighSchool, \n TestSatCS.state as Test\n\n\nfrom \n contact1 C\n left join CONTSUPP HichschoolCS on C.accountno=HichschoolCS.accountno \n and HichschoolCS.contact = 'High School'\n left join (SELECT MAX(state) state, \n accountno\n FROM\n CONTSUPP TestSatCS \n WHERE \n contact = 'Test/SAT'\n GROUP BY\n accountno) TestSatCS\n on C.accountno=TestSatCS.accountno \n\nwhere \n C.KEY1!='00PRSP' \n AND (C.U_KEY2='2009 FALL' \n OR C.U_KEY2='2010 SPRING' \n OR C.U_KEY2='2010 J TERM' \n OR C.U_KEY2='2010 SUMMER')\n
\n
soup wrap:
select
C.ACCOUNTNO,
C.CONTACT,
C.KEY1,
C.KEY4,
HichschoolCS.State as HighSchool,
TestSatCS.state as Test
from
contact1 C
left join CONTSUPP HichschoolCS on C.accountno=HichschoolCS.accountno
and HichschoolCS.contact = 'High School'
left join CONTSUPP TestSatCS on C.accountno=TestSatCS.accountno
and TestSatCS.contact = 'Test/SAT'
where
C.KEY1!='00PRSP'
AND (C.U_KEY2='2009 FALL'
OR C.U_KEY2='2010 SPRING'
OR C.U_KEY2='2010 J TERM'
OR C.U_KEY2='2010 SUMMER')
Update: Added example of only using the highest SAT score
select
C.ACCOUNTNO,
C.CONTACT,
C.KEY1,
C.KEY4,
HichschoolCS.State as HighSchool,
TestSatCS.state as Test
from
contact1 C
left join CONTSUPP HichschoolCS on C.accountno=HichschoolCS.accountno
and HichschoolCS.contact = 'High School'
left join (SELECT MAX(state) state,
accountno
FROM
CONTSUPP TestSatCS
WHERE
contact = 'Test/SAT'
GROUP BY
accountno) TestSatCS
on C.accountno=TestSatCS.accountno
where
C.KEY1!='00PRSP'
AND (C.U_KEY2='2009 FALL'
OR C.U_KEY2='2010 SPRING'
OR C.U_KEY2='2010 J TERM'
OR C.U_KEY2='2010 SUMMER')
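The key idea in the update — left-joining a pre-aggregated MAX-per-account derived table so the outer query never multiplies rows — can be sketched with sqlite3. Table names follow the answer; the sample data is made up (note MAX on a text column compares lexically, which is why the scores are same-length strings here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contact1 (accountno INTEGER, name TEXT)")
conn.execute("CREATE TABLE CONTSUPP (accountno INTEGER, contact TEXT, state TEXT)")
conn.executemany("INSERT INTO contact1 VALUES (?, ?)", [(1, "Ann"), (2, "Bob")])
conn.executemany("INSERT INTO CONTSUPP VALUES (?, ?, ?)",
                 [(1, "Test/SAT", "1200"),
                  (1, "Test/SAT", "1350"),   # Ann took the test twice
                  (2, "Test/SAT", "1100")])

# One row per contact, carrying only the highest score.
rows = conn.execute("""
    SELECT c.accountno, c.name, t.state
      FROM contact1 c
      LEFT JOIN (SELECT accountno, MAX(state) AS state
                   FROM CONTSUPP
                  WHERE contact = 'Test/SAT'
                  GROUP BY accountno) t
        ON c.accountno = t.accountno
     ORDER BY c.accountno""").fetchall()
```

Without the GROUP BY subquery, Ann would appear twice, once per test record.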
qid & accept id:
(3240290, 3240324)
query:
How to find rows where a set of numbers is between two numbers?
soup:
Using a JOIN, but risks duplicates:
\nSELECT t.*\n FROM TABLE1 t\n JOIN (SELECT Sequence FROM Table1 WHERE Hash=2783342) x ON x.sequence BETWEEN t.sequence \n AND t.sequenceend\n
\nUsing EXISTS, no duplicate risk:
\nSELECT t.*\n FROM TABLE1 t\n WHERE EXISTS(SELECT NULL\n FROM TABLE1 x\n WHERE x.hash = 2783342\n AND x.sequence BETWEEN t.sequence \n AND t.sequenceend)\n
\n
soup wrap:
Using a JOIN, but risks duplicates:
SELECT t.*
FROM TABLE1 t
JOIN (SELECT Sequence FROM Table1 WHERE Hash=2783342) x ON x.sequence BETWEEN t.sequence
AND t.sequenceend
Using EXISTS, no duplicate risk:
SELECT t.*
FROM TABLE1 t
WHERE EXISTS(SELECT NULL
FROM TABLE1 x
WHERE x.hash = 2783342
AND x.sequence BETWEEN t.sequence
AND t.sequenceend)
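The EXISTS version can be demonstrated with sqlite3; the hash and sequence values below are made up, with one overlapping and one non-overlapping range:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (hash INTEGER, sequence INTEGER, sequenceend INTEGER)")
conn.executemany("INSERT INTO table1 VALUES (?, ?, ?)",
                 [(2783342, 100, 100),   # the probe row itself
                  (1,       90,  120),   # range containing sequence 100
                  (2,       200, 300)])  # range not containing it

# Each row returned at most once, even if several probe rows matched.
rows = conn.execute("""
    SELECT t.* FROM table1 t
     WHERE EXISTS (SELECT NULL FROM table1 x
                    WHERE x.hash = 2783342
                      AND x.sequence BETWEEN t.sequence AND t.sequenceend)""").fetchall()
```

The 200-300 range does not contain sequence 100, so it is the only row filtered out.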
qid & accept id:
(3244796, 3244864)
query:
Stored procedure - Passing a parameter as xml and reading the data
soup:
You just need a WHERE clause I think.
\n INSERT INTO SN_IO ( [C1] ,[C2] ,[C3] )\n SELECT [C1] ,[C2] ,[C3]\n FROM OPENXML (@currRecord, 'ios/io', 1)\n WITH ([C1] [varchar](25) 'C1',\n [C2] [varchar](25) 'C2',\n [C3] [varchar](20) 'C3' ) \n WHERE [C1] IS NOT NULL AND [C2] IS NOT NULL AND [C3] IS NOT NULL \n
\nOr you can do it in the XPath instead which I guess may be more efficient
\n FROM OPENXML (@currRecord, 'ios/io[C1 and C2 and C3]', 1)\n
\n
soup wrap:
You just need a WHERE clause I think.
INSERT INTO SN_IO ( [C1] ,[C2] ,[C3] )
SELECT [C1] ,[C2] ,[C3]
FROM OPENXML (@currRecord, 'ios/io', 1)
WITH ([C1] [varchar](25) 'C1',
[C2] [varchar](25) 'C2',
[C3] [varchar](20) 'C3' )
WHERE [C1] IS NOT NULL AND [C2] IS NOT NULL AND [C3] IS NOT NULL
Or you can do it in the XPath instead which I guess may be more efficient
FROM OPENXML (@currRecord, 'ios/io[C1 and C2 and C3]', 1)
qid & accept id:
(3296390, 3296777)
query:
Enforcing uniqueness on PostgreSQL table column after non-unique values already inserted
soup:
The query you're looking for is:
\nselect distinct on (my_unique_1, my_unique_2) * from my_table;\n
\nThis selects one row for each combination of columns within distinct on. Actually, it's always the first row. It's rarely used without order by since there is no reliable order in which the rows are returned (and so which is the first one).
\nCombined with order by you can choose which rows are the first (this leaves rows with the greatest last_update_date):
\n select distinct on (my_unique_1, my_unique_2) * \n from my_table order by my_unique_1, my_unique_2, last_update_date desc;\n
\nNow you can select this into a new table:
\n create table my_new_table as\n select distinct on (my_unique_1, my_unique_2) * \n from my_table order by my_unique_1, my_unique_2, last_update_date desc;\n
\nOr you can use it for delete, assuming row_id is a primary key:
\n delete from my_table where row_id not in (\n select distinct on (my_unique_1, my_unique_2) row_id \n from my_table order by my_unique_1, my_unique_2, last_update_date desc);\n
\n
soup wrap:
The query you're looking for is:
select distinct on (my_unique_1, my_unique_2) * from my_table;
This selects one row for each combination of columns within distinct on. Actually, it's always the first row. It's rarely used without order by since there is no reliable order in which the rows are returned (and so which is the first one).
Combined with order by you can choose which rows are the first (this leaves rows with the greatest last_update_date):
select distinct on (my_unique_1, my_unique_2) *
from my_table order by my_unique_1, my_unique_2, last_update_date desc;
Now you can select this into a new table:
create table my_new_table as
select distinct on (my_unique_1, my_unique_2) *
from my_table order by my_unique_1, my_unique_2, last_update_date desc;
Or you can use it for delete, assuming row_id is a primary key:
delete from my_table where row_id not in (
select distinct on (my_unique_1, my_unique_2) row_id
from my_table order by my_unique_1, my_unique_2, last_update_date desc);
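SQLite has no DISTINCT ON, but the same cleanup — delete everything except the newest row per key pair — can be sketched using SQLite's documented bare-column behaviour: with a MAX() aggregate, the other selected columns come from the row holding the maximum. This is a SQLite-specific alternative, not the PostgreSQL syntax above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE my_table (
    row_id INTEGER PRIMARY KEY,
    my_unique_1 TEXT, my_unique_2 TEXT, last_update_date TEXT)""")
conn.executemany("INSERT INTO my_table VALUES (?, ?, ?, ?)",
                 [(1, "a", "x", "2010-01-01"),
                  (2, "a", "x", "2010-06-01"),   # newer duplicate of row 1
                  (3, "b", "y", "2010-03-01")])

# Keep only the row_id paired with MAX(last_update_date) in each group.
conn.execute("""
    DELETE FROM my_table
     WHERE row_id NOT IN (
           SELECT row_id
             FROM (SELECT row_id, MAX(last_update_date)
                     FROM my_table
                    GROUP BY my_unique_1, my_unique_2))""")
survivors = sorted(r[0] for r in conn.execute("SELECT row_id FROM my_table"))
```

Row 1 is removed because row 2 carries the same key pair with a later date.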
qid & accept id:
(3317750, 3317795)
query:
Counting all other types but the current one
soup:
You can do one query to get the distinct types, and LEFT JOIN the same table, checking for type-inequality:
\nSELECT t1.type,\n SUM(t2.some_value) / COUNT(t2.type)\nFROM ( SELECT DISTINCT type FROM temptable ) t1\nLEFT JOIN temptable t2 ON ( t1.type <> t2.type )\nGROUP BY t1.type\n
\nSince you only want the average, you could replace the line
\nFROM ( SELECT DISTINCT type FROM temptable ) t1\n
\nby
\nFROM temptable t1\n
\nbut the first solution might perform better, since the number of rows is reduced earlier.
\n
soup wrap:
You can do one query to get the distinct types, and LEFT JOIN the same table, checking for type-inequality:
SELECT t1.type,
SUM(t2.some_value) / COUNT(t2.type)
FROM ( SELECT DISTINCT type FROM temptable ) t1
LEFT JOIN temptable t2 ON ( t1.type <> t2.type )
GROUP BY t1.type
Since you only want the average, you could replace the line
FROM ( SELECT DISTINCT type FROM temptable ) t1
by
FROM temptable t1
but the first solution might perform better, since the number of rows is reduced earlier.
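A sqlite3 sketch of the inequality join, with made-up values (the `* 1.0` forces real division, since the original SUM/COUNT would truncate with integer operands in SQLite):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE temptable (type TEXT, some_value INTEGER)")
conn.executemany("INSERT INTO temptable VALUES (?, ?)",
                 [("a", 10), ("a", 20), ("b", 30), ("c", 40)])

# For each type, average some_value over all OTHER types.
rows = conn.execute("""
    SELECT t1.type, SUM(t2.some_value) * 1.0 / COUNT(t2.type) AS avg_of_others
      FROM (SELECT DISTINCT type FROM temptable) t1
      LEFT JOIN temptable t2 ON t1.type <> t2.type
     GROUP BY t1.type
     ORDER BY t1.type""").fetchall()
```

For "a" the other rows are 30 and 40, giving 35.0; for "c" they are 10, 20 and 30, giving 20.0.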
qid & accept id:
(3318852, 3318885)
query:
what is the quickest way to run a query to find where 2 fields are the same
soup:
EDIT
\nConcatenation can give false answers, as pointed out in the comments ('Roberto Neil' vs 'Robert ONeil').
\nHere is an answer that eliminates the concatenation issue: it finds the non-duplicate rows and excludes them from the final answer.
\nWITH MyTable AS\n(\n SELECT 1 as ID, 'John' as FirstName, 'Doe' as LastName\n UNION\n SELECT 2 as ID, 'John' as FirstName, 'Doe' as LastName\n UNION\n SELECT 3 as ID, 'Tim' as FirstName, 'Doe' as LastName\n UNION\n SELECT 4 as ID, 'Jane' as FirstName, 'Doe' as LastName\n UNION\n SELECT 5 as ID, 'Jane' as FirstName, 'Doe' as LastName\n)\nSELECT Id, FirstName, LastName\nFROM MyTable SelectTable\nWHERE Id Not In\n(\n SELECT Min (Id)\n From MyTable SearchTable\n GROUP BY FirstName, LastName\n HAVING COUNT (*) = 1\n)\n
\n
\nOLD SOLUTION
\nUse GROUP BY and HAVING; check out this working sample:
\nWITH MyTable AS\n(\nSELECT 1 as ID, 'John' as FirstName, 'Doe' as LastName\nUNION\nSELECT 2 as ID, 'John' as FirstName, 'Doe' as LastName\nUNION\nSELECT 3 as ID, 'Time' as FirstName, 'Doe' as LastName\nUNION\nSELECT 4 as ID, 'Jane' as FirstName, 'Doe' as LastName\n)\nSELECT ID, FirstName, LastName\nFROM MyTable\nWHERE FirstName + LastName IN\n(\n SELECT FirstName + LastName\n FROM MyTable\n GROUP BY FirstName + LastName\n HAVING COUNT (*) > 1\n)\n
\nThis will result in the following
\nID FirstName LastName\n----------- --------- --------\n1 John Doe\n2 John Doe\n
\n
soup wrap:
EDIT
Concatenation can give false answers, as pointed out in the comments ('Roberto Neil' vs 'Robert ONeil').
Here is an answer that eliminates the concatenation issue: it finds the non-duplicate rows and excludes them from the final answer.
WITH MyTable AS
(
SELECT 1 as ID, 'John' as FirstName, 'Doe' as LastName
UNION
SELECT 2 as ID, 'John' as FirstName, 'Doe' as LastName
UNION
SELECT 3 as ID, 'Tim' as FirstName, 'Doe' as LastName
UNION
SELECT 4 as ID, 'Jane' as FirstName, 'Doe' as LastName
UNION
SELECT 5 as ID, 'Jane' as FirstName, 'Doe' as LastName
)
SELECT Id, FirstName, LastName
FROM MyTable SelectTable
WHERE Id Not In
(
SELECT Min (Id)
From MyTable SearchTable
GROUP BY FirstName, LastName
HAVING COUNT (*) = 1
)
OLD SOLUTION
Use GROUP BY and HAVING; check out this working sample:
WITH MyTable AS
(
SELECT 1 as ID, 'John' as FirstName, 'Doe' as LastName
UNION
SELECT 2 as ID, 'John' as FirstName, 'Doe' as LastName
UNION
SELECT 3 as ID, 'Time' as FirstName, 'Doe' as LastName
UNION
SELECT 4 as ID, 'Jane' as FirstName, 'Doe' as LastName
)
SELECT ID, FirstName, LastName
FROM MyTable
WHERE FirstName + LastName IN
(
SELECT FirstName + LastName
FROM MyTable
GROUP BY FirstName + LastName
HAVING COUNT (*) > 1
)
This will result in the following
ID          FirstName LastName
----------- --------- --------
1           John      Doe
2           John      Doe
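The concatenation-free approach can also be written as a join on both columns separately, which is easy to check with sqlite3 (sample names made up, including the 'Roberto Neil' / 'Robert ONeil' pair that defeats concatenation):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER, FirstName TEXT, LastName TEXT)")
conn.executemany("INSERT INTO people VALUES (?, ?, ?)",
                 [(1, "John", "Doe"), (2, "John", "Doe"), (3, "Tim", "Doe"),
                  (4, "Roberto", "Neil"), (5, "Robert", "ONeil")])

# Join on each column separately -- 'Roberto'+'Neil' and
# 'Robert'+'ONeil' cannot collide the way concatenation would.
rows = conn.execute("""
    SELECT p.id, p.FirstName, p.LastName
      FROM people p
      JOIN (SELECT FirstName, LastName
              FROM people
             GROUP BY FirstName, LastName
            HAVING COUNT(*) > 1) d
        ON p.FirstName = d.FirstName AND p.LastName = d.LastName
     ORDER BY p.id""").fetchall()
```

Only the two genuine John Doe duplicates come back; the Neil/ONeil rows do not.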
qid & accept id:
(3332230, 3332280)
query:
I need to know how i can write IF statements and CASE break statements that use and execute queries, etc in MySQL?
soup:
To my knowledge, MySQL doesn't support a table valued data type. The use of the function you posted would be:
\nSELECT simplecompare(yt.n, yt.m) AS eval\n FROM YOUR_TABLE yt\n
\n...which would return:
\neval\n--------\n1 = 1\n2 < 3\netc.\n
\nSQL is set based, which is different from typical programming (procedural or OO).
\n
soup wrap:
To my knowledge, MySQL doesn't support a table valued data type. The use of the function you posted would be:
SELECT simplecompare(yt.n, yt.m) AS eval
FROM YOUR_TABLE yt
...which would return:
eval
--------
1 = 1
2 < 3
etc.
SQL is set based, which is different from typical programming (procedural or OO).
qid & accept id:
(3345268, 3345450)
query:
How to delete completely duplicate rows
soup:
Try this - it will delete all duplicates from your table:
\n;WITH duplicates AS\n(\n SELECT \n ProductID, ProductName, Description, Category,\n ROW_NUMBER() OVER (PARTITION BY ProductID, ProductName\n ORDER BY ProductID) 'RowNum'\n FROM dbo.tblProduct\n)\nDELETE FROM duplicates\nWHERE RowNum > 1\nGO\n\nSELECT * FROM dbo.tblProduct\nGO\n
\nYour duplicates should be gone now: output is:
\nProductID ProductName DESCRIPTION Category\n 1 Cinthol cosmetic soap soap\n 1 Lux cosmetic soap soap\n 1 Crowning Glory cosmetic soap soap\n 2 Cinthol nice soap soap\n 3 Lux nice soap soap\n
\n
soup wrap:
Try this - it will delete all duplicates from your table:
;WITH duplicates AS
(
SELECT
ProductID, ProductName, Description, Category,
ROW_NUMBER() OVER (PARTITION BY ProductID, ProductName
ORDER BY ProductID) 'RowNum'
FROM dbo.tblProduct
)
DELETE FROM duplicates
WHERE RowNum > 1
GO
SELECT * FROM dbo.tblProduct
GO
Your duplicates should be gone now: output is:
ProductID  ProductName     DESCRIPTION    Category
1          Cinthol         cosmetic soap  soap
1          Lux             cosmetic soap  soap
1          Crowning Glory  cosmetic soap  soap
2          Cinthol         nice soap      soap
3          Lux             nice soap      soap
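SQLite has no updatable CTEs, but the same "keep one row per group, delete the rest" effect can be sketched with its hidden rowid in place of ROW_NUMBER (a SQLite-flavoured alternative, with made-up sample rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblProduct (ProductID INTEGER, ProductName TEXT)")
conn.executemany("INSERT INTO tblProduct VALUES (?, ?)",
                 [(1, "Cinthol"), (1, "Cinthol"),   # exact duplicate pair
                  (1, "Lux"), (2, "Cinthol")])

# Keep only the lowest rowid per (ProductID, ProductName) group.
conn.execute("""
    DELETE FROM tblProduct
     WHERE rowid NOT IN (SELECT MIN(rowid)
                           FROM tblProduct
                          GROUP BY ProductID, ProductName)""")
remaining = conn.execute(
    "SELECT ProductID, ProductName FROM tblProduct "
    "ORDER BY ProductID, ProductName").fetchall()
```

One of the two (1, 'Cinthol') rows is removed; every distinct combination survives once.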
qid & accept id:
(3361768, 3361804)
query:
Copy data from one column to other column (which is in a different table)
soup:
In SQL Server 2008 you can use a multi-table update as follows:
\nUPDATE tblindiantime \nSET tblindiantime.CountryName = contacts.BusinessCountry\nFROM tblindiantime \nJOIN contacts\nON -- join condition here\n
\nYou need a join condition to specify which row should be updated.
\nIf the target table is currently empty then you should use an INSERT instead:
\nINSERT INTO tblindiantime (CountryName)\nSELECT BusinessCountry FROM contacts\n
\n
soup wrap:
In SQL Server 2008 you can use a multi-table update as follows:
UPDATE tblindiantime
SET tblindiantime.CountryName = contacts.BusinessCountry
FROM tblindiantime
JOIN contacts
ON -- join condition here
You need a join condition to specify which row should be updated.
If the target table is currently empty then you should use an INSERT instead:
INSERT INTO tblindiantime (CountryName)
SELECT BusinessCountry FROM contacts
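SQLite does not accept the UPDATE ... FROM syntax shown above (it only gained a variant of it in 3.33), but the same copy can be sketched with a correlated subquery. The contact_id = id join condition here is an assumption, since the original question left the relationship unspecified:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (id INTEGER, BusinessCountry TEXT)")
conn.execute("CREATE TABLE tblindiantime (contact_id INTEGER, CountryName TEXT)")
conn.executemany("INSERT INTO contacts VALUES (?, ?)", [(1, "India"), (2, "UK")])
conn.executemany("INSERT INTO tblindiantime VALUES (?, ?)", [(1, None), (2, None)])

# Correlated-subquery form of the multi-table UPDATE: each target row
# pulls BusinessCountry from its matching contacts row.
conn.execute("""
    UPDATE tblindiantime
       SET CountryName = (SELECT c.BusinessCountry
                            FROM contacts c
                           WHERE c.id = tblindiantime.contact_id)""")
rows = conn.execute(
    "SELECT CountryName FROM tblindiantime ORDER BY contact_id").fetchall()
```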
qid & accept id:
(3426560, 3426580)
query:
SQL Server / 2 select in the same Stored procedure
soup:
That can be done in a single statement:
\nSELECT b.*\n FROM TABLE_B b\n JOIN TABLE_A a ON a.id2 = b.id2\n WHERE a.id1 = @ID1\n
\nBut this means that there will be duplicates if more than one record in TABLE_A relates to a TABLE_B record. In that situation, use EXISTS rather than adding DISTINCT to the previous query:
\nSELECT b.*\n FROM TABLE_B b\n WHERE EXISTS(SELECT NULL\n FROM TABLE_A a\n WHERE a.id2 = b.id2\n AND a.id1 = @ID1)\n
\nThe IN clause is equivalent, but EXISTS will be faster if there are duplicates:
\nSELECT b.*\n FROM TABLE_B b\n WHERE b.id2 IN (SELECT a.id2\n FROM TABLE_A a\n WHERE a.id1 = @ID1)\n
\n
soup wrap:
That can be done in a single statement:
SELECT b.*
FROM TABLE_B b
JOIN TABLE_A a ON a.id2 = b.id2
WHERE a.id1 = @ID1
But this means that there will be duplicates if more than one record in TABLE_A relates to a TABLE_B record. In that situation, use EXISTS rather than adding DISTINCT to the previous query:
SELECT b.*
FROM TABLE_B b
WHERE EXISTS(SELECT NULL
FROM TABLE_A a
WHERE a.id2 = b.id2
AND a.id1 = @ID1)
The IN clause is equivalent, but EXISTS will be faster if there are duplicates:
SELECT b.*
FROM TABLE_B b
WHERE b.id2 IN (SELECT a.id2
FROM TABLE_A a
WHERE a.id1 = @ID1)
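The duplicate-vs-EXISTS point is easy to see with sqlite3 (made-up tables; TABLE_A holds two rows referencing the same id2, which a plain JOIN would return twice):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_a (id1 INTEGER, id2 INTEGER)")
conn.execute("CREATE TABLE table_b (id2 INTEGER, label TEXT)")
conn.executemany("INSERT INTO table_a VALUES (?, ?)",
                 [(7, 1), (7, 1), (7, 2), (8, 3)])  # id1=7 links to id2=1 twice
conn.executemany("INSERT INTO table_b VALUES (?, ?)",
                 [(1, "one"), (2, "two"), (3, "three")])

# EXISTS is a semi-join: each B row appears at most once,
# no matter how many A rows point at it.
rows = conn.execute("""
    SELECT b.id2, b.label FROM table_b b
     WHERE EXISTS (SELECT NULL FROM table_a a
                    WHERE a.id2 = b.id2 AND a.id1 = 7)
     ORDER BY b.id2""").fetchall()
```

Swapping the JOIN form in would yield (1, 'one') twice; EXISTS does not.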
qid & accept id:
(3440516, 3440590)
query:
How do I use dynamic SQL to declare a column name derived from a table name?
soup:
This example passes in a table name and a column name:
\nCREATE PROCEDURE A\n ( tab IN VARCHAR2\n , col_name IN VARCHAR2\n ) IS\nBEGIN\n EXECUTE IMMEDIATE 'INSERT INTO ' || tab || '(' || col_name || ') VALUES(123)';\nEND A;\n
\nYou need to realise that everything after EXECUTE IMMEDIATE must be a string that contains some valid SQL. A good way to verify this is to set it up in a variable and print it to the screen:
\nCREATE PROCEDURE A\n ( tab IN VARCHAR2\n , col_name IN VARCHAR2\n ) IS\n v_sql VARCHAR2(2000);\nBEGIN\n v_sql := 'INSERT INTO ' || tab || '(' || col_name || ') VALUES(123)';\n DBMS_OUTPUT.PUT_LINE('SQL='||v_sql);\n EXECUTE IMMEDIATE v_sql;\nEND A;\n
\nThis should then display something like the following in SQL Plus:
\n\nSQL=INSERT INTO mytable(mycolumn)\n VALUES(123)
\n
\n(provided server output is turned on).
\nEDIT: Since you want the column name to be a local variable that always has the same value, this could be done as:
\nCREATE PROCEDURE A (tab IN VARCHAR2)\nIS\n col_name VARCHAR2(30) := 'MYCOLUMN';\n v_sql VARCHAR2(2000);\nBEGIN\n v_sql := 'INSERT INTO ' || tab || '(' || col_name || ') VALUES(123)';\n DBMS_OUTPUT.PUT_LINE('SQL='||v_sql);\n EXECUTE IMMEDIATE v_sql;\nEND A;\n
\n
soup wrap:
This example passes in a table name and a column name:
CREATE PROCEDURE A
( tab IN VARCHAR2
, col_name IN VARCHAR2
) IS
BEGIN
EXECUTE IMMEDIATE 'INSERT INTO ' || tab || '(' || col_name || ') VALUES(123)';
END A;
You need to realise that everything after EXECUTE IMMEDIATE must be a string that contains some valid SQL. A good way to verify this is to set it up in a variable and print it to the screen:
CREATE PROCEDURE A
( tab IN VARCHAR2
, col_name IN VARCHAR2
) IS
v_sql VARCHAR2(2000);
BEGIN
v_sql := 'INSERT INTO ' || tab || '(' || col_name || ') VALUES(123)';
DBMS_OUTPUT.PUT_LINE('SQL='||v_sql);
EXECUTE IMMEDIATE v_sql;
END A;
This should then display something like the following in SQL Plus:
SQL=INSERT INTO mytable(mycolumn)
VALUES(123)
(provided server output is turned on).
EDIT: Since you want the column name to be a local variable that always has the same value, this could be done as:
CREATE PROCEDURE A (tab IN VARCHAR2)
IS
col_name VARCHAR2(30) := 'MYCOLUMN';
v_sql VARCHAR2(2000);
BEGIN
v_sql := 'INSERT INTO ' || tab || '(' || col_name || ') VALUES(123)';
DBMS_OUTPUT.PUT_LINE('SQL='||v_sql);
EXECUTE IMMEDIATE v_sql;
END A;
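The build-the-string-then-execute discipline carries over to any host language, since identifiers can never be bound as parameters. A sqlite3 sketch of the same idea, with made-up table and column names (in real code the names must come from a trusted whitelist, because string interpolation is otherwise a SQL injection vector):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (mycolumn INTEGER)")

tab, col_name = "mytable", "mycolumn"

# As with EXECUTE IMMEDIATE: assemble the full statement text first,
# so it can be logged or printed before being executed.
v_sql = "INSERT INTO {}({}) VALUES(123)".format(tab, col_name)
conn.execute(v_sql)

(value,) = conn.execute("SELECT mycolumn FROM mytable").fetchone()
```

Printing `v_sql` before executing it mirrors the DBMS_OUTPUT.PUT_LINE debugging step above.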
qid & accept id:
(3444082, 3444195)
query:
How to filter out similar rows (equal on certain columns) based on other column data
soup:
Assuming SQL Server 2005+, use:
\nSELECT x.id,\n x.forename,\n x.surname,\n x.somedate\n FROM (SELECT t.id,\n t.forename,\n t.surname,\n t.somedate,\n ROW_NUMBER() OVER (PARTITION BY t.forename, t.surname \n ORDER BY t.somedate DESC, t.id DESC) AS rank\n FROM TABLE t) x\nWHERE x.rank = 1\n
\nA risky approach would be:
\n SELECT MAX(t.id) AS id,\n t.forename,\n t.surname,\n MAX(t.somedate) AS somedate\n FROM TABLE t\nGROUP BY t.forename, t.surname\n
\n
soup wrap:
Assuming SQL Server 2005+, use:
SELECT x.id,
x.forename,
x.surname,
x.somedate
FROM (SELECT t.id,
t.forename,
t.surname,
t.somedate,
ROW_NUMBER() OVER (PARTITION BY t.forename, t.surname
ORDER BY t.somedate DESC, t.id DESC) AS rank
FROM TABLE t) x
WHERE x.rank = 1
A risky approach would be:
SELECT MAX(t.id) AS id,
t.forename,
t.surname,
MAX(t.somedate) AS somedate
FROM TABLE t
GROUP BY t.forename, t.surname
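The ROW_NUMBER pattern also runs in SQLite 3.25+ (so under any recent Python), which makes a quick sketch possible; the names and dates below are made up, and `rn` is used as the alias since RANK is a keyword there:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER, forename TEXT, surname TEXT, somedate TEXT)")
conn.executemany("INSERT INTO people VALUES (?, ?, ?, ?)",
                 [(1, "Ann", "Lee", "2010-01-01"),
                  (2, "Ann", "Lee", "2010-05-01"),   # later row for the same person
                  (3, "Bob", "Kay", "2010-02-01")])

# Number rows within each (forename, surname) group, newest first,
# then keep only the first row of each group.
rows = conn.execute("""
    SELECT id, forename, surname, somedate
      FROM (SELECT t.*,
                   ROW_NUMBER() OVER (PARTITION BY t.forename, t.surname
                                      ORDER BY t.somedate DESC, t.id DESC) AS rn
              FROM people t) x
     WHERE x.rn = 1
     ORDER BY id""").fetchall()
```

Only the 2010-05-01 row survives for Ann Lee, which is exactly what the risky GROUP BY version cannot guarantee when id and somedate disagree.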
qid & accept id:
(3482194, 3482211)
query:
Ensuring uniqueness of additions to MySQL table using PHP
soup:
You can make the column that stores the User Agent string unique, and do INSERT ... ON DUPLICATE KEY UPDATE for your stats insertions
\nFor the table:
\n CREATE TABLE IF NOT EXISTS `user_agent_stats` (\n `user_agent` varchar(255) collate utf8_bin NOT NULL,\n `hits` int(21) NOT NULL default '1',\n UNIQUE KEY `user_agent` (`user_agent`)\n) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_bin;\n\n+------------+--------------+------+-----+---------+-------+\n| Field | Type | Null | Key | Default | Extra |\n+------------+--------------+------+-----+---------+-------+\n| user_agent | varchar(255) | NO | PRI | NULL | | \n| hits | int(21) | NO | | NULL | | \n+------------+--------------+------+-----+---------+-------+\n
\nYou could use the following query to insert user agents:
\nINSERT INTO user_agent_stats( user_agent ) VALUES('user agent string') ON DUPLICATE KEY UPDATE hits = hits+1;\n
\nExecuting the above query multiple times gives:
\n+-------------------+------+\n| user_agent | hits |\n+-------------------+------+\n| user agent string | 6 | \n+-------------------+------+\n
\n
soup wrap:
You can make the column that stores the User Agent string unique, and do INSERT ... ON DUPLICATE KEY UPDATE for your stats insertions.
For the table:
CREATE TABLE IF NOT EXISTS `user_agent_stats` (
`user_agent` varchar(255) collate utf8_bin NOT NULL,
`hits` int(21) NOT NULL default '1',
UNIQUE KEY `user_agent` (`user_agent`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
+------------+--------------+------+-----+---------+-------+
| Field      | Type         | Null | Key | Default | Extra |
+------------+--------------+------+-----+---------+-------+
| user_agent | varchar(255) | NO   | PRI | NULL    |       |
| hits       | int(21)      | NO   |     | NULL    |       |
+------------+--------------+------+-----+---------+-------+
You could use the following query to insert user agents:
INSERT INTO user_agent_stats( user_agent ) VALUES('user agent string') ON DUPLICATE KEY UPDATE hits = hits+1;
Executing the above query multiple times gives:
+-------------------+------+
| user_agent        | hits |
+-------------------+------+
| user agent string | 6    |
+-------------------+------+
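The same upsert counter can be sketched with sqlite3, whose ON CONFLICT ... DO UPDATE clause (SQLite 3.24+, bundled with any recent Python) plays the role of MySQL's ON DUPLICATE KEY UPDATE:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE user_agent_stats (
    user_agent TEXT PRIMARY KEY,
    hits       INTEGER NOT NULL DEFAULT 1)""")

# First execution inserts with hits=1; the five conflicts each add 1.
for _ in range(6):
    conn.execute("""INSERT INTO user_agent_stats (user_agent) VALUES (?)
                    ON CONFLICT(user_agent) DO UPDATE SET hits = hits + 1""",
                 ("user agent string",))

(hits,) = conn.execute("SELECT hits FROM user_agent_stats").fetchone()
```

Six executions leave a single row with hits = 6, matching the MySQL output shown above.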
qid & accept id:
(3502478, 3502541)
query:
Top User SQL Query With Categories?
soup:
This will return you top 10 users:
\nSELECT u.*,\n (\n SELECT COUNT(*)\n FROM votes v\n WHERE v.receiver_id = u.user_id\n ) AS score\nFROM users u\nORDER BY\n score DESC\nLIMIT 10\n
\nThis will return you one top user from each category:
\nSELECT u.*\nFROM (\n SELECT DISTINCT category_id\n FROM users\n ) uo\nJOIN users u\nON u.user_id = \n (\n SELECT user_id\n FROM users ui\n WHERE ui.category_id = uo.category_id\n ORDER BY\n (\n SELECT COUNT(*)\n FROM votes v\n WHERE v.receiver_id = ui.user_id\n ) DESC\n LIMIT 1\n )\n
\n
soup wrap:
This will return you top 10 users:
SELECT u.*,
(
SELECT COUNT(*)
FROM votes v
WHERE v.receiver_id = u.user_id
) AS score
FROM users u
ORDER BY
score DESC
LIMIT 10
This will return you one top user from each category:
SELECT u.*
FROM (
SELECT DISTINCT category_id
FROM users
) uo
JOIN users u
ON u.user_id =
(
SELECT user_id
FROM users ui
WHERE ui.category_id = uo.category_id
ORDER BY
(
SELECT COUNT(*)
FROM votes v
WHERE v.receiver_id = ui.user_id
) DESC
LIMIT 1
)
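The second query's trick — a correlated ORDER BY ... LIMIT 1 subquery picking the best user per category — also works in sqlite3, so it can be sketched with made-up users and votes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_id INTEGER, category_id INTEGER)")
conn.execute("CREATE TABLE votes (receiver_id INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, 10), (2, 10), (3, 20)])
conn.executemany("INSERT INTO votes VALUES (?)",
                 [(1,), (2,), (2,), (3,)])  # user 2 leads category 10

# One top user per distinct category.
rows = conn.execute("""
    SELECT uo.category_id,
           (SELECT ui.user_id FROM users ui
             WHERE ui.category_id = uo.category_id
             ORDER BY (SELECT COUNT(*) FROM votes v
                        WHERE v.receiver_id = ui.user_id) DESC
             LIMIT 1) AS top_user
      FROM (SELECT DISTINCT category_id FROM users) uo
     ORDER BY uo.category_id""").fetchall()
```

User 2 wins category 10 with two votes; user 3 wins category 20 unopposed.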
qid & accept id:
(3526673, 3526711)
query:
getting all chars before space in SQL SERVER
soup:
Select Substring( MyTextColumn, 1, CharIndex( ' ', MyTextColumn ) - 1)\n
\nActually, if these are datetime values, then there is a better way:
\nSelect Cast(DateDiff(d, 0, MyDateColumn) As datetime)\n
\n
soup wrap:
Select Substring( MyTextColumn, 1, CharIndex( ' ', MyTextColumn ) - 1)
Actually, if these are datetime values, then there is a better way:
Select Cast(DateDiff(d, 0, MyDateColumn) As datetime)
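SQLite spells the same string operation with instr and substr, which makes the first query easy to verify (the datetime-as-text sample is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# INSTR finds the 1-based position of the space; SUBSTR takes
# everything before it, mirroring the CHARINDEX/SUBSTRING form.
(before_space,) = conn.execute(
    "SELECT substr(MyTextColumn, 1, instr(MyTextColumn, ' ') - 1) "
    "FROM (SELECT '2010-08-23 12:00:00' AS MyTextColumn)").fetchone()
```

As the answer notes, if the column is a genuine datetime, truncating via date arithmetic is the better route than string slicing.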
qid & accept id:
(3550497, 3550522)
query:
SQL Server - Give a Login Permission for Read Access to All Existing and Future Databases
soup:
For new databases, add the user in the model database. This is used as the template for all new databases.
\nUSE model\nCREATE USER ... FROM LOGIN...\nEXEC sp_addrolemember 'db_datareader', '...'\n
\nFor existing databases, use sp_MSForEachDb
\nEXEC sp_MSForEachDb '\n USE ?\n CREATE USER ... FROM LOGIN... \n EXEC sp_addrolemember ''db_datareader'', ''...''\n'\n
\n
soup wrap:
For new databases, add the user in the model database. This is used as the template for all new databases.
USE model
CREATE USER ... FROM LOGIN...
EXEC sp_addrolemember 'db_datareader', '...'
For existing databases, use sp_MSForEachDb
EXEC sp_MSForEachDb '
USE ?
CREATE USER ... FROM LOGIN...
EXEC sp_addrolemember ''db_datareader'', ''...''
'
qid & accept id:
(3579079, 3579462)
query:
How can you represent inheritance in a database?
soup:
@Bill Karwin describes three inheritance models in his SQL Antipatterns book, when proposing solutions to the SQL Entity-Attribute-Value antipattern. This is a brief overview:
\nSingle Table Inheritance (aka Table Per Hierarchy Inheritance):
\nUsing a single table as in your first option is probably the simplest design. As you mentioned, many attributes that are subtype-specific will have to be given a NULL value on rows where these attributes do not apply. With this model, you would have one policies table, which would look something like this:
\n+------+---------------------+----------+----------------+------------------+\n| id | date_issued | type | vehicle_reg_no | property_address |\n+------+---------------------+----------+----------------+------------------+\n| 1 | 2010-08-20 12:00:00 | MOTOR | 01-A-04004 | NULL |\n| 2 | 2010-08-20 13:00:00 | MOTOR | 02-B-01010 | NULL |\n| 3 | 2010-08-20 14:00:00 | PROPERTY | NULL | Oxford Street |\n| 4 | 2010-08-20 15:00:00 | MOTOR | 03-C-02020 | NULL |\n+------+---------------------+----------+----------------+------------------+\n\n\------ COMMON FIELDS -------/ \----- SUBTYPE SPECIFIC FIELDS -----/\n
\nKeeping the design simple is a plus, but the main problems with this approach are the following:
\n\nWhen it comes to adding new subtypes, you would have to alter the table to accommodate the attributes that describe these new objects. This can quickly become problematic when you have many subtypes, or if you plan to add subtypes on a regular basis.
\nThe database will not be able to enforce which attributes apply and which don't, since there is no metadata to define which attributes belong to which subtypes.
\nYou also cannot enforce NOT NULL on attributes of a subtype that should be mandatory. You would have to handle this in your application, which in general is not ideal.
\n
\nConcrete Table Inheritance:
\nAnother approach to tackle inheritance is to create a new table for each subtype, repeating all the common attributes in each table. For example:
\n--// Table: policies_motor\n+------+---------------------+----------------+\n| id | date_issued | vehicle_reg_no |\n+------+---------------------+----------------+\n| 1 | 2010-08-20 12:00:00 | 01-A-04004 |\n| 2 | 2010-08-20 13:00:00 | 02-B-01010 |\n| 3 | 2010-08-20 15:00:00 | 03-C-02020 |\n+------+---------------------+----------------+\n\n--// Table: policies_property \n+------+---------------------+------------------+\n| id | date_issued | property_address |\n+------+---------------------+------------------+\n| 1 | 2010-08-20 14:00:00 | Oxford Street | \n+------+---------------------+------------------+\n
\nThis design will basically solve the problems identified for the single table method:
\n\nMandatory attributes can now be enforced with NOT NULL.
\nAdding a new subtype requires adding a new table instead of adding columns to an existing one.
\nThere is also no risk that an inappropriate attribute is set for a particular subtype, such as the vehicle_reg_no field for a property policy.
\nThere is no need for the type attribute as in the single table method. The type is now defined by the metadata: the table name.
\n
\nHowever this model also comes with a few disadvantages:
\n\nThe common attributes are mixed with the subtype specific attributes, and there is no easy way to identify them. The database will not know either.
\nWhen defining the tables, you would have to repeat the common attributes for each subtype table. That's definitely not DRY.
\nSearching for all the policies regardless of the subtype becomes difficult, and would require a bunch of UNIONs.
\n
\nThis is how you would have to query all the policies regardless of the type:
\nSELECT date_issued, other_common_fields, 'MOTOR' AS type\nFROM policies_motor\nUNION ALL\nSELECT date_issued, other_common_fields, 'PROPERTY' AS type\nFROM policies_property;\n
\nNote how adding new subtypes would require the above query to be modified with an additional UNION ALL for each subtype. This can easily lead to bugs in your application if this operation is forgotten.
\nClass Table Inheritance (aka Table Per Type Inheritance):
\nThis is the solution that @David mentions in the other answer. You create a single table for your base class, which includes all the common attributes. Then you would create specific tables for each subtype, whose primary key also serves as a foreign key to the base table. Example:
\nCREATE TABLE policies (\n policy_id int,\n date_issued datetime,\n\n -- // other common attributes ...\n);\n\nCREATE TABLE policy_motor (\n policy_id int,\n vehicle_reg_no varchar(20),\n\n -- // other attributes specific to motor insurance ...\n\n FOREIGN KEY (policy_id) REFERENCES policies (policy_id)\n);\n\nCREATE TABLE policy_property (\n policy_id int,\n property_address varchar(20),\n\n -- // other attributes specific to property insurance ...\n\n FOREIGN KEY (policy_id) REFERENCES policies (policy_id)\n);\n
\nThis solution solves the problems identified in the other two designs:
\n\nMandatory attributes can be enforced with NOT NULL.
\nAdding a new subtype requires adding a new table instead of adding columns to an existing one.
\nNo risk that an inappropriate attribute is set for a particular subtype.
\nNo need for the type attribute.
\nNow the common attributes are not mixed with the subtype specific attributes anymore.
\nWe can stay DRY, finally. There is no need to repeat the common attributes for each subtype table when creating the tables.
\nManaging an auto incrementing id for the policies becomes easier, because this can be handled by the base table, instead of each subtype table generating them independently.
\nSearching for all the policies regardless of the subtype now becomes very easy: No UNIONs needed - just a SELECT * FROM policies.
\n
\nI consider the class table approach as the most suitable in most situations.
\n
\nThe names of these three models come from Martin Fowler's book Patterns of Enterprise Application Architecture.
\n
soup wrap:
@Bill Karwin describes three inheritance models in his SQL Antipatterns book, when proposing solutions to the SQL Entity-Attribute-Value antipattern. This is a brief overview:
Single Table Inheritance (aka Table Per Hierarchy Inheritance):
Using a single table as in your first option is probably the simplest design. As you mentioned, many attributes that are subtype-specific will have to be given a NULL value on rows where these attributes do not apply. With this model, you would have one policies table, which would look something like this:
+------+---------------------+----------+----------------+------------------+
| id | date_issued | type | vehicle_reg_no | property_address |
+------+---------------------+----------+----------------+------------------+
| 1 | 2010-08-20 12:00:00 | MOTOR | 01-A-04004 | NULL |
| 2 | 2010-08-20 13:00:00 | MOTOR | 02-B-01010 | NULL |
| 3 | 2010-08-20 14:00:00 | PROPERTY | NULL | Oxford Street |
| 4 | 2010-08-20 15:00:00 | MOTOR | 03-C-02020 | NULL |
+------+---------------------+----------+----------------+------------------+
\------ COMMON FIELDS -------/ \----- SUBTYPE SPECIFIC FIELDS -----/
Keeping the design simple is a plus, but the main problems with this approach are the following:
When it comes to adding new subtypes, you would have to alter the table to accommodate the attributes that describe these new objects. This can quickly become problematic when you have many subtypes, or if you plan to add subtypes on a regular basis.
The database will not be able to enforce which attributes apply and which don't, since there is no metadata to define which attributes belong to which subtypes.
You also cannot enforce NOT NULL on attributes of a subtype that should be mandatory. You would have to handle this in your application, which in general is not ideal.
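The missing NOT NULL enforcement can be partially recovered with a table-level CHECK constraint, at the cost of rewriting the constraint for every new subtype. A sketch against the policies table above (note that CHECK is only enforced by some engines; MySQL, for instance, ignores it before 8.0.16):

```sql
-- Sketch: partially enforce subtype rules in the single-table design.
-- Column names are taken from the example table above.
ALTER TABLE policies
  ADD CONSTRAINT chk_subtype_fields CHECK (
    (type = 'MOTOR'    AND vehicle_reg_no IS NOT NULL AND property_address IS NULL)
    OR
    (type = 'PROPERTY' AND property_address IS NOT NULL AND vehicle_reg_no IS NULL)
  );
```

This still scales poorly: every new subtype means editing one ever-growing constraint.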
Concrete Table Inheritance:
Another approach to tackle inheritance is to create a new table for each subtype, repeating all the common attributes in each table. For example:
--// Table: policies_motor
+------+---------------------+----------------+
| id | date_issued | vehicle_reg_no |
+------+---------------------+----------------+
| 1 | 2010-08-20 12:00:00 | 01-A-04004 |
| 2 | 2010-08-20 13:00:00 | 02-B-01010 |
| 3 | 2010-08-20 15:00:00 | 03-C-02020 |
+------+---------------------+----------------+
--// Table: policies_property
+------+---------------------+------------------+
| id | date_issued | property_address |
+------+---------------------+------------------+
| 1 | 2010-08-20 14:00:00 | Oxford Street |
+------+---------------------+------------------+
This design basically solves the problems identified for the single table method:
Mandatory attributes can now be enforced with NOT NULL.
Adding a new subtype requires adding a new table instead of adding columns to an existing one.
There is also no risk that an inappropriate attribute is set for a particular subtype, such as the vehicle_reg_no field for a property policy.
There is no need for the type attribute as in the single table method. The type is now defined by the metadata: the table name.
However this model also comes with a few disadvantages:
The common attributes are mixed with the subtype specific attributes, and there is no easy way to identify them. The database will not know either.
When defining the tables, you would have to repeat the common attributes for each subtype table. That's definitely not DRY.
Searching for all the policies regardless of the subtype becomes difficult, and would require a bunch of UNIONs.
This is how you would have to query all the policies regardless of the type:
SELECT date_issued, other_common_fields, 'MOTOR' AS type
FROM policies_motor
UNION ALL
SELECT date_issued, other_common_fields, 'PROPERTY' AS type
FROM policies_property;
Note how adding new subtypes would require the above query to be modified with an additional UNION ALL for each subtype. This can easily lead to bugs in your application if this operation is forgotten.
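One way to contain that risk is to define the UNION once, in a view, so applications have a single place to update when a subtype is added. A sketch, assuming the two tables above:

```sql
CREATE VIEW all_policies AS
SELECT date_issued, other_common_fields, 'MOTOR' AS type
FROM policies_motor
UNION ALL
SELECT date_issued, other_common_fields, 'PROPERTY' AS type
FROM policies_property;

-- Callers then simply:
SELECT * FROM all_policies;
```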
Class Table Inheritance (aka Table Per Type Inheritance):
This is the solution that @David mentions in the other answer. You create a single table for your base class, which includes all the common attributes. Then you would create specific tables for each subtype, whose primary key also serves as a foreign key to the base table. Example:
CREATE TABLE policies (
policy_id int PRIMARY KEY,
date_issued datetime,
-- // other common attributes ...
);
CREATE TABLE policy_motor (
policy_id int PRIMARY KEY,
vehicle_reg_no varchar(20),
-- // other attributes specific to motor insurance ...
FOREIGN KEY (policy_id) REFERENCES policies (policy_id)
);
CREATE TABLE policy_property (
policy_id int PRIMARY KEY,
property_address varchar(20),
-- // other attributes specific to property insurance ...
FOREIGN KEY (policy_id) REFERENCES policies (policy_id)
);
This solution solves the problems identified in the other two designs:
Mandatory attributes can be enforced with NOT NULL.
Adding a new subtype requires adding a new table instead of adding columns to an existing one.
No risk that an inappropriate attribute is set for a particular subtype.
No need for the type attribute.
Now the common attributes are not mixed with the subtype specific attributes anymore.
We can stay DRY, finally. There is no need to repeat the common attributes for each subtype table when creating the tables.
Managing an auto incrementing id for the policies becomes easier, because this can be handled by the base table, instead of each subtype table generating them independently.
Searching for all the policies regardless of the subtype now becomes very easy: No UNIONs needed - just a SELECT * FROM policies.
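When the subtype-specific columns are needed as well, one LEFT JOIN per subtype brings them in. A sketch against the tables above:

```sql
SELECT p.policy_id,
       p.date_issued,
       m.vehicle_reg_no,    -- NULL for non-motor policies
       pr.property_address  -- NULL for non-property policies
FROM policies p
LEFT JOIN policy_motor m ON m.policy_id = p.policy_id
LEFT JOIN policy_property pr ON pr.policy_id = p.policy_id;
```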
I consider the class table approach the most suitable in most situations.
The names of these three models come from Martin Fowler's book Patterns of Enterprise Application Architecture.
qid & accept id:
(3589286, 3589298)
query:
Simple MySql - Get Largest Number in Table
soup:
Two options - using LIMIT:
\n SELECT yt.numeric_column\n FROM YOUR_TABLE yt\nORDER BY yt.numeric_column DESC\n LIMIT 1\n
\nUsing MAX:
\nSELECT MAX(yt.numeric_column)\n FROM YOUR_TABLE yt\n
\n
soup wrap:
Two options - using LIMIT:
SELECT yt.numeric_column
FROM YOUR_TABLE yt
ORDER BY yt.numeric_column DESC
LIMIT 1
Using MAX:
SELECT MAX(yt.numeric_column)
FROM YOUR_TABLE yt
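If the whole row holding the largest value is needed, not just the value itself, the MAX can be pushed into a subquery (same hypothetical table and column names as above). Note this returns every row tied for the maximum:

```sql
SELECT yt.*
FROM YOUR_TABLE yt
WHERE yt.numeric_column = (SELECT MAX(numeric_column) FROM YOUR_TABLE);
```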
qid & accept id:
(3609687, 3609741)
query:
Iterating through dates in SQL
soup:
Try this:
\nSelect DateAdd(day, 0, DateDiff(day, 0, StartDate)) Date,\n Name, Sum (Work) TotalWork\nFrom TableData\nGroup By Name, DateAdd(day, 0, DateDiff(day, 0, StartDate)) \n
\nTo get the missing days is harder.
\n Declare @SD DateTime, @ED DateTime -- StartDate and EndDate variables\n Select @SD = DateAdd(day, 0, DateDiff(day, 0, Min(StartDate))),\n @ED = DateAdd(day, 0, DateDiff(day, 0, Max(StartDate)))\n From TableData\n Declare @Ds Table (aDate SmallDateTime)\n While @SD <= @ED Begin \n Insert @Ds(aDate ) Values @SD\n Set @SD = @SD + 1\n End \n-- ----------------------------------------------------\n Select DateAdd(day, 0, DateDiff(day, 0, td.StartDate)) Date,\n td.Name, Sum (td.Work) TotalWork\n From @Ds ds Left Join TableData td\n On DateAdd(day, 0, DateDiff(day, 0, tD.StartDate)) = ds.aDate \n Group By Name, DateAdd(day, 0, DateDiff(day, 0, tD.StartDate)) \n
\nEDIT, I am revisiting this with a solution that uses a Common Table Expression (CTE). This does NOT require use of a dates table.
\n Declare @SD DateTime, @ED DateTime\n Declare @count integer = datediff(day, @SD, @ED)\n With Ints(i) As\n (Select 0 Union All\n Select i + 1 From Ints\n Where i < @count ) \n Select DateAdd(day, 0, DateDiff(day, 0, td.StartDate)) Date,\n td.Name, Sum (td.Work) TotalWork\n From Ints i \n Left Join TableData d\n On DateDiff(day, @SD, d.StartDate) = i.i\n Group By d.Name, DateAdd(day, 0, DateDiff(day, 0, d.StartDate)) \n
\n
soup wrap:
Try this:
Select DateAdd(day, 0, DateDiff(day, 0, StartDate)) Date,
Name, Sum (Work) TotalWork
From TableData
Group By Name, DateAdd(day, 0, DateDiff(day, 0, StartDate))
To get the missing days is harder.
Declare @SD DateTime, @ED DateTime -- StartDate and EndDate variables
Select @SD = DateAdd(day, 0, DateDiff(day, 0, Min(StartDate))),
@ED = DateAdd(day, 0, DateDiff(day, 0, Max(StartDate)))
From TableData
Declare @Ds Table (aDate SmallDateTime)
While @SD <= @ED Begin
Insert @Ds(aDate) Values (@SD)
Set @SD = @SD + 1
End
-- ----------------------------------------------------
Select DateAdd(day, 0, DateDiff(day, 0, td.StartDate)) Date,
td.Name, Sum (td.Work) TotalWork
From @Ds ds Left Join TableData td
On DateAdd(day, 0, DateDiff(day, 0, td.StartDate)) = ds.aDate
Group By td.Name, DateAdd(day, 0, DateDiff(day, 0, td.StartDate))
EDIT, I am revisiting this with a solution that uses a Common Table Expression (CTE). This does NOT require use of a dates table.
Declare @SD DateTime, @ED DateTime
Select @SD = DateAdd(day, 0, DateDiff(day, 0, Min(StartDate))),
@ED = DateAdd(day, 0, DateDiff(day, 0, Max(StartDate)))
From TableData
Declare @count integer = datediff(day, @SD, @ED)
With Ints(i) As
(Select 0 Union All
Select i + 1 From Ints
Where i < @count )
Select DateAdd(day, 0, DateDiff(day, 0, d.StartDate)) Date,
d.Name, Sum (d.Work) TotalWork
From Ints i
Left Join TableData d
On DateDiff(day, @SD, d.StartDate) = i.i
Group By d.Name, DateAdd(day, 0, DateDiff(day, 0, d.StartDate))
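One caveat with the recursive CTE: SQL Server caps recursion at 100 levels by default, so date ranges longer than about 100 days need a MAXRECURSION hint on the outer statement. A minimal illustration:

```sql
-- Generate 0..364 with a recursive CTE; the default cap of 100 would fail here.
WITH Ints(i) AS
  (SELECT 0
   UNION ALL
   SELECT i + 1 FROM Ints WHERE i < 364)
SELECT i FROM Ints
OPTION (MAXRECURSION 365); -- default is 100; 0 removes the cap entirely
```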
qid & accept id:
(3623645, 3624616)
query:
How to repair a corrupted MPTT tree (nested set) in the database using SQL?
soup:
Using SQL Server, following script seems to be working for me.
\nOutput testscript
\ncategory_id name parent lft rgt lftcalc rgtcalc\n----------- -------------------- ----------- ----------- ----------- ----------- -----------\n1 ELECTRONICS NULL 1 20 1 20\n2 TELEVISIONS 1 2 9 2 9\n3 TUBE 2 3 4 3 4\n4 LCD 2 5 6 5 6\n5 PLASMA 2 7 8 7 8\n6 PORTABLE ELECTRONICS 1 10 19 10 19\n7 MP3 PLAYERS 6 11 14 11 14\n8 FLASH 7 12 13 12 13\n9 CD PLAYERS 6 15 16 15 16\n10 2 WAY RADIOS 6 17 18 17 18\n
\nScript
\nSET NOCOUNT ON\nGO\n\nDECLARE @nested_category TABLE (\n category_id INT PRIMARY KEY,\n name VARCHAR(20) NOT NULL,\n parent INT,\n lft INT,\n rgt INT\n);\n\nDECLARE @current_Category_ID INTEGER\nDECLARE @current_parent INTEGER\nDECLARE @SafeGuard INTEGER\nDECLARE @myLeft INTEGER\nSET @SafeGuard = 100\n\nINSERT INTO @nested_category \nSELECT 1,'ELECTRONICS',NULL,NULL,NULL\nUNION ALL SELECT 2,'TELEVISIONS',1,NULL,NULL\nUNION ALL SELECT 3,'TUBE',2,NULL,NULL\nUNION ALL SELECT 4,'LCD',2,NULL,NULL\nUNION ALL SELECT 5,'PLASMA',2,NULL,NULL\nUNION ALL SELECT 6,'PORTABLE ELECTRONICS',1,NULL,NULL\nUNION ALL SELECT 7,'MP3 PLAYERS',6,NULL,NULL\nUNION ALL SELECT 8,'FLASH',7,NULL,NULL\nUNION ALL SELECT 9,'CD PLAYERS',6,NULL,NULL\nUNION ALL SELECT 10,'2 WAY RADIOS',6,NULL,NULL\n\n/* Initialize */\nUPDATE @nested_category \nSET lft = 1\n , rgt = 2\nWHERE parent IS NULL\n\nUPDATE @nested_category \nSET lft = NULL\n , rgt = NULL\nWHERE parent IS NOT NULL\n\nWHILE EXISTS (SELECT * FROM @nested_category WHERE lft IS NULL) AND @SafeGuard > 0\nBEGIN\n SELECT @current_Category_ID = MAX(nc.category_id)\n FROM @nested_category nc\n INNER JOIN @nested_category nc2 ON nc2.category_id = nc.parent\n WHERE nc.lft IS NULL\n AND nc2.lft IS NOT NULL\n\n SELECT @current_parent = parent\n FROM @nested_category\n WHERE category_id = @current_category_id\n\n SELECT @myLeft = lft\n FROM @nested_category\n WHERE category_id = @current_parent\n\n UPDATE @nested_category SET rgt = rgt + 2 WHERE rgt > @myLeft;\n UPDATE @nested_category SET lft = lft + 2 WHERE lft > @myLeft;\n UPDATE @nested_category SET lft = @myLeft + 1, rgt = @myLeft + 2 WHERE category_id = @current_category_id\n\n SET @SafeGuard = @SafeGuard - 1\nEND\n\nSELECT * FROM @nested_category ORDER BY category_id\n\nSELECT COUNT(node.name), node.name, MIN(node.lft)\nFROM @nested_category AS node,\n @nested_category AS parent\nWHERE node.lft BETWEEN parent.lft AND parent.rgt\nGROUP BY \n node.name\nORDER BY\n 3, 1\n
\nTestscript ##
\nSET NOCOUNT ON\nGO\n\nDECLARE @nested_category TABLE (\n category_id INT PRIMARY KEY,\n name VARCHAR(20) NOT NULL,\n parent INT,\n lft INT,\n rgt INT, \n lftcalc INT,\n rgtcalc INT\n);\n\nINSERT INTO @nested_category \nSELECT 1,'ELECTRONICS',NULL,1,20,NULL,NULL\nUNION ALL SELECT 2,'TELEVISIONS',1,2,9,NULL,NULL\nUNION ALL SELECT 3,'TUBE',2,3,4,NULL,NULL\nUNION ALL SELECT 4,'LCD',2,5,6,NULL,NULL\nUNION ALL SELECT 5,'PLASMA',2,7,8,NULL,NULL\nUNION ALL SELECT 6,'PORTABLE ELECTRONICS',1,10,19,NULL,NULL\nUNION ALL SELECT 7,'MP3 PLAYERS',6,11,14,NULL,NULL\nUNION ALL SELECT 8,'FLASH',7,12,13,NULL,NULL\nUNION ALL SELECT 9,'CD PLAYERS',6,15,16,NULL,NULL\nUNION ALL SELECT 10,'2 WAY RADIOS',6,17,18,NULL,NULL\n\n/* Initialize */\nUPDATE @nested_category \nSET lftcalc = 1\n , rgtcalc = 2\nWHERE parent IS NULL\n\nDECLARE @current_Category_ID INTEGER\nDECLARE @current_parent INTEGER\nDECLARE @SafeGuard INTEGER\nDECLARE @myRight INTEGER\nDECLARE @myLeft INTEGER\nSET @SafeGuard = 100\nWHILE EXISTS (SELECT * FROM @nested_category WHERE lftcalc IS NULL) AND @SafeGuard > 0\nBEGIN\n SELECT @current_Category_ID = MAX(nc.category_id)\n FROM @nested_category nc\n INNER JOIN @nested_category nc2 ON nc2.category_id = nc.parent\n WHERE nc.lftcalc IS NULL\n AND nc2.lftcalc IS NOT NULL\n\n SELECT @current_parent = parent\n FROM @nested_category\n WHERE category_id = @current_category_id\n\n SELECT @myLeft = lftcalc\n FROM @nested_category\n WHERE category_id = @current_parent\n\n UPDATE @nested_category SET rgtcalc = rgtcalc + 2 WHERE rgtcalc > @myLeft;\n UPDATE @nested_category SET lftcalc = lftcalc + 2 WHERE lftcalc > @myLeft;\n UPDATE @nested_category SET lftcalc = @myLeft + 1, rgtcalc = @myLeft + 2 WHERE category_id = @current_category_id\n\n SELECT * FROM @nested_category WHERE category_id = @current_parent\n SELECT * FROM @nested_category ORDER BY category_id\n SET @SafeGuard = @SafeGuard - 1\nEND\n\nSELECT * FROM @nested_category ORDER BY category_id\n\nSELECT COUNT(node.name), 
node.name, MIN(node.lft)\nFROM @nested_category AS node,\n @nested_category AS parent\nWHERE node.lft BETWEEN parent.lft AND parent.rgt\nGROUP BY \n node.name\nORDER BY\n 3, 1\n
\n
soup wrap:
Using SQL Server, the following script seems to work for me.
Output testscript
category_id name parent lft rgt lftcalc rgtcalc
----------- -------------------- ----------- ----------- ----------- ----------- -----------
1 ELECTRONICS NULL 1 20 1 20
2 TELEVISIONS 1 2 9 2 9
3 TUBE 2 3 4 3 4
4 LCD 2 5 6 5 6
5 PLASMA 2 7 8 7 8
6 PORTABLE ELECTRONICS 1 10 19 10 19
7 MP3 PLAYERS 6 11 14 11 14
8 FLASH 7 12 13 12 13
9 CD PLAYERS 6 15 16 15 16
10 2 WAY RADIOS 6 17 18 17 18
Script
SET NOCOUNT ON
GO
DECLARE @nested_category TABLE (
category_id INT PRIMARY KEY,
name VARCHAR(20) NOT NULL,
parent INT,
lft INT,
rgt INT
);
DECLARE @current_Category_ID INTEGER
DECLARE @current_parent INTEGER
DECLARE @SafeGuard INTEGER
DECLARE @myLeft INTEGER
SET @SafeGuard = 100
INSERT INTO @nested_category
SELECT 1,'ELECTRONICS',NULL,NULL,NULL
UNION ALL SELECT 2,'TELEVISIONS',1,NULL,NULL
UNION ALL SELECT 3,'TUBE',2,NULL,NULL
UNION ALL SELECT 4,'LCD',2,NULL,NULL
UNION ALL SELECT 5,'PLASMA',2,NULL,NULL
UNION ALL SELECT 6,'PORTABLE ELECTRONICS',1,NULL,NULL
UNION ALL SELECT 7,'MP3 PLAYERS',6,NULL,NULL
UNION ALL SELECT 8,'FLASH',7,NULL,NULL
UNION ALL SELECT 9,'CD PLAYERS',6,NULL,NULL
UNION ALL SELECT 10,'2 WAY RADIOS',6,NULL,NULL
/* Initialize */
UPDATE @nested_category
SET lft = 1
, rgt = 2
WHERE parent IS NULL
UPDATE @nested_category
SET lft = NULL
, rgt = NULL
WHERE parent IS NOT NULL
WHILE EXISTS (SELECT * FROM @nested_category WHERE lft IS NULL) AND @SafeGuard > 0
BEGIN
SELECT @current_Category_ID = MAX(nc.category_id)
FROM @nested_category nc
INNER JOIN @nested_category nc2 ON nc2.category_id = nc.parent
WHERE nc.lft IS NULL
AND nc2.lft IS NOT NULL
SELECT @current_parent = parent
FROM @nested_category
WHERE category_id = @current_category_id
SELECT @myLeft = lft
FROM @nested_category
WHERE category_id = @current_parent
UPDATE @nested_category SET rgt = rgt + 2 WHERE rgt > @myLeft;
UPDATE @nested_category SET lft = lft + 2 WHERE lft > @myLeft;
UPDATE @nested_category SET lft = @myLeft + 1, rgt = @myLeft + 2 WHERE category_id = @current_category_id
SET @SafeGuard = @SafeGuard - 1
END
SELECT * FROM @nested_category ORDER BY category_id
SELECT COUNT(node.name), node.name, MIN(node.lft)
FROM @nested_category AS node,
@nested_category AS parent
WHERE node.lft BETWEEN parent.lft AND parent.rgt
GROUP BY
node.name
ORDER BY
3, 1
Testscript
SET NOCOUNT ON
GO
DECLARE @nested_category TABLE (
category_id INT PRIMARY KEY,
name VARCHAR(20) NOT NULL,
parent INT,
lft INT,
rgt INT,
lftcalc INT,
rgtcalc INT
);
INSERT INTO @nested_category
SELECT 1,'ELECTRONICS',NULL,1,20,NULL,NULL
UNION ALL SELECT 2,'TELEVISIONS',1,2,9,NULL,NULL
UNION ALL SELECT 3,'TUBE',2,3,4,NULL,NULL
UNION ALL SELECT 4,'LCD',2,5,6,NULL,NULL
UNION ALL SELECT 5,'PLASMA',2,7,8,NULL,NULL
UNION ALL SELECT 6,'PORTABLE ELECTRONICS',1,10,19,NULL,NULL
UNION ALL SELECT 7,'MP3 PLAYERS',6,11,14,NULL,NULL
UNION ALL SELECT 8,'FLASH',7,12,13,NULL,NULL
UNION ALL SELECT 9,'CD PLAYERS',6,15,16,NULL,NULL
UNION ALL SELECT 10,'2 WAY RADIOS',6,17,18,NULL,NULL
/* Initialize */
UPDATE @nested_category
SET lftcalc = 1
, rgtcalc = 2
WHERE parent IS NULL
DECLARE @current_Category_ID INTEGER
DECLARE @current_parent INTEGER
DECLARE @SafeGuard INTEGER
DECLARE @myRight INTEGER
DECLARE @myLeft INTEGER
SET @SafeGuard = 100
WHILE EXISTS (SELECT * FROM @nested_category WHERE lftcalc IS NULL) AND @SafeGuard > 0
BEGIN
SELECT @current_Category_ID = MAX(nc.category_id)
FROM @nested_category nc
INNER JOIN @nested_category nc2 ON nc2.category_id = nc.parent
WHERE nc.lftcalc IS NULL
AND nc2.lftcalc IS NOT NULL
SELECT @current_parent = parent
FROM @nested_category
WHERE category_id = @current_category_id
SELECT @myLeft = lftcalc
FROM @nested_category
WHERE category_id = @current_parent
UPDATE @nested_category SET rgtcalc = rgtcalc + 2 WHERE rgtcalc > @myLeft;
UPDATE @nested_category SET lftcalc = lftcalc + 2 WHERE lftcalc > @myLeft;
UPDATE @nested_category SET lftcalc = @myLeft + 1, rgtcalc = @myLeft + 2 WHERE category_id = @current_category_id
SELECT * FROM @nested_category WHERE category_id = @current_parent
SELECT * FROM @nested_category ORDER BY category_id
SET @SafeGuard = @SafeGuard - 1
END
SELECT * FROM @nested_category ORDER BY category_id
SELECT COUNT(node.name), node.name, MIN(node.lft)
FROM @nested_category AS node,
@nested_category AS parent
WHERE node.lft BETWEEN parent.lft AND parent.rgt
GROUP BY
node.name
ORDER BY
3, 1
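Before running a repair like the one above, it can be worth checking whether the tree is actually corrupted. In a valid nested set the smallest lft is 1, the largest rgt is twice the node count, and all lft and rgt values are distinct. A sketch against a hypothetical permanent nested_category table (these are necessary conditions, not sufficient ones):

```sql
SELECT CASE WHEN MIN(lft) = 1
            AND MAX(rgt) = COUNT(*) * 2
            AND COUNT(DISTINCT lft) = COUNT(*)
            AND COUNT(DISTINCT rgt) = COUNT(*)
       THEN 'looks consistent' ELSE 'needs repair' END AS verdict
FROM nested_category;
```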
qid & accept id:
(3675616, 3675746)
query:
Skipping rows in sql query (finding end date based on start date and worked days)
soup:
You could have a where clause that says there must be N working days between the start and the end day. Unlike the row_number() variants, this should work in MS Access. For example:
\ndeclare @Task table (taskid int, empid int, start date, days int)\ninsert @Task values (1, 1, '2010-01-01', 1)\ninsert @Task values (2, 1, '2010-01-01', 2)\ninsert @Task values (3, 1, '2010-01-01', 3)\n\ndeclare @WorkableDays table (empid int, day date)\ninsert @WorkableDays values (1, '2010-01-01')\ninsert @WorkableDays values (1, '2010-01-02')\ninsert @WorkableDays values (1, '2010-01-05')\n\nselect t.taskid\n, t.start\n, endday.day as end\nfrom @Task t\njoin @WorkableDays endday\non endday.empid = t.empid\nwhere t.days = \n (\n select COUNT(*)\n from @WorkableDays wd\n where wd.empId = t.empId\n and wd.day between t.start and endday.day\n )\n
\nThis prints:
\ntaskid start end\n1 2010-01-01 2010-01-01\n2 2010-01-01 2010-01-02\n3 2010-01-01 2010-01-05\n
\n
soup wrap:
You could have a where clause that says there must be N working days between the start and the end day. Unlike the row_number() variants, this should work in MS Access. For example:
declare @Task table (taskid int, empid int, start date, days int)
insert @Task values (1, 1, '2010-01-01', 1)
insert @Task values (2, 1, '2010-01-01', 2)
insert @Task values (3, 1, '2010-01-01', 3)
declare @WorkableDays table (empid int, day date)
insert @WorkableDays values (1, '2010-01-01')
insert @WorkableDays values (1, '2010-01-02')
insert @WorkableDays values (1, '2010-01-05')
select t.taskid
, t.start
, endday.day as [end]
from @Task t
join @WorkableDays endday
on endday.empid = t.empid
where t.days =
(
select COUNT(*)
from @WorkableDays wd
where wd.empId = t.empId
and wd.day between t.start and endday.day
)
This prints:
taskid start end
1 2010-01-01 2010-01-01
2 2010-01-01 2010-01-02
3 2010-01-01 2010-01-05
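Table variables are T-SQL only; they are used above just to make the example self-contained. In Access itself the core query would run against permanent tables, along these (hypothetical) lines, with reserved words bracketed:

```sql
SELECT t.taskid, t.[start], endday.[day] AS end_day
FROM Task AS t INNER JOIN WorkableDays AS endday
  ON endday.empid = t.empid
WHERE t.days = (SELECT COUNT(*)
                FROM WorkableDays AS wd
                WHERE wd.empid = t.empid
                  AND wd.[day] BETWEEN t.[start] AND endday.[day]);
```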
qid & accept id:
(3702873, 3705188)
query:
MySQL: How to select the UTC offset and DST for all timezones?
soup:
Try this query. The offsettime is the (Offset / 60 / 60)
\nSELECT tzname.`Time_zone_id`,(`Offset`/60/60) AS `offsettime`,`Is_DST`,`Name`,`Transition_type_id`,`Abbreviation`\nFROM `time_zone_transition_type` AS `transition`, `time_zone_name` AS `tzname`\nWHERE transition.`Time_zone_id`=tzname.`Time_zone_id`\nORDER BY transition.`Offset` ASC;\n
\nThe results are
\n501 -12.00000000 0 0 PHOT Pacific/Enderbury\n369 -12.00000000 0 0 GMT+12 Etc/GMT+12\n513 -12.00000000 0 1 KWAT Pacific/Kwajalein\n483 -12.00000000 0 1 KWAT Kwajalein\n518 -11.50000000 0 1 NUT Pacific/Niue\n496 -11.50000000 0 1 SAMT Pacific/Apia\n528 -11.50000000 0 1 SAMT Pacific/Samoa\n555 -11.50000000 0 1 SAMT US/Samoa\n521 -11.50000000 0 1 SAMT Pacific/Pago_Pago\n496 -11.44888889 0 0 LMT Pacific/Apia\n528 -11.38000000 0 0 LMT Pacific/Samoa\n555 -11.38000000 0 0 LMT US/Samoa\n521 -11.38000000 0 0 LMT Pacific/Pago_Pago\n518 -11.33333333 0 0 NUT Pacific/Niue\n544 -11.00000000 0 3 BST US/Aleutian\n163 -11.00000000 0 3 BST America/Nome\n518 -11.00000000 0 2 NUT Pacific/Niue\n496 -11.00000000 0 2 WST Pacific/Apia\n544 -11.00000000 0 0 NST US/Aleutian\n163 -11.00000000 0 0 NST America/Nome\n528 -11.00000000 0 4 SST Pacific/Samoa\n528 -11.00000000 0 3 BST Pacific/Samoa\n
\n
soup wrap:
Try this query. The offsettime column is the offset converted to hours (Offset / 60 / 60).
SELECT tzname.`Time_zone_id`,(`Offset`/60/60) AS `offsettime`,`Is_DST`,`Name`,`Transition_type_id`,`Abbreviation`
FROM `time_zone_transition_type` AS `transition`, `time_zone_name` AS `tzname`
WHERE transition.`Time_zone_id`=tzname.`Time_zone_id`
ORDER BY transition.`Offset` ASC;
The results are
501 -12.00000000 0 0 PHOT Pacific/Enderbury
369 -12.00000000 0 0 GMT+12 Etc/GMT+12
513 -12.00000000 0 1 KWAT Pacific/Kwajalein
483 -12.00000000 0 1 KWAT Kwajalein
518 -11.50000000 0 1 NUT Pacific/Niue
496 -11.50000000 0 1 SAMT Pacific/Apia
528 -11.50000000 0 1 SAMT Pacific/Samoa
555 -11.50000000 0 1 SAMT US/Samoa
521 -11.50000000 0 1 SAMT Pacific/Pago_Pago
496 -11.44888889 0 0 LMT Pacific/Apia
528 -11.38000000 0 0 LMT Pacific/Samoa
555 -11.38000000 0 0 LMT US/Samoa
521 -11.38000000 0 0 LMT Pacific/Pago_Pago
518 -11.33333333 0 0 NUT Pacific/Niue
544 -11.00000000 0 3 BST US/Aleutian
163 -11.00000000 0 3 BST America/Nome
518 -11.00000000 0 2 NUT Pacific/Niue
496 -11.00000000 0 2 WST Pacific/Apia
544 -11.00000000 0 0 NST US/Aleutian
163 -11.00000000 0 0 NST America/Nome
528 -11.00000000 0 4 SST Pacific/Samoa
528 -11.00000000 0 3 BST Pacific/Samoa
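If only the current offset of one named zone is needed, rather than the full transition history, CONVERT_TZ against the same loaded time zone tables is simpler. A sketch, using America/New_York as an example zone (CONVERT_TZ returns NULL if the zone tables are not loaded):

```sql
SELECT TIMESTAMPDIFF(MINUTE,
                     UTC_TIMESTAMP(),
                     CONVERT_TZ(UTC_TIMESTAMP(), 'UTC', 'America/New_York')) / 60
       AS current_offset_hours;
```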
qid & accept id:
(3805664, 3805706)
query:
Sort out the three first occurence of an attribute
soup:
To get events for the next three non-sequential days, starting today, use:
\nSELECT x.*\n FROM (SELECT ep.*,\n CASE\n WHEN DATE(@dt) = DATE(x.dt) THEN @rownum\n ELSE @rownum := @rownum + 1\n END AS rank,\n FROM EVENT_POST ep\n JOIN (SELECT @rowrum := 0, @dt := NULL) r\n WHERE ep.startdate >= CURRENT_DATE\n ORDER BY t.startdate, t.starttime) x\n WHERE x.rank <= 3\n
\nTo get events for the next three sequential days, starting today, use the DATE_ADD function:
\nSELECT ep.*\n FROM EVENT_POST ep\n WHERE ep.startdate BETWEEN DATE(NOW)\n AND DATE_ADD(DATE(NOW), INTERVAL 3 DAY)\n
\n
soup wrap:
To get events for the next three non-sequential days, starting today, use:
SELECT x.*
FROM (SELECT ep.*,
CASE
WHEN @dt = ep.startdate THEN @rownum
ELSE @rownum := @rownum + 1
END AS rank,
@dt := ep.startdate
FROM EVENT_POST ep
JOIN (SELECT @rownum := 0, @dt := NULL) r
WHERE ep.startdate >= CURRENT_DATE
ORDER BY ep.startdate, ep.starttime) x
WHERE x.rank <= 3
To get events for the next three sequential days, starting today, use the DATE_ADD function:
SELECT ep.*
FROM EVENT_POST ep
WHERE ep.startdate BETWEEN DATE(NOW())
AND DATE_ADD(DATE(NOW()), INTERVAL 3 DAY)
qid & accept id:
(3819810, 3819953)
query:
Normalizing a table: finding unique columns over series of rows (Oracle 10.x)
soup:
Since 10 tables is not a lot, here is (some sort of) pseudo code
\nfor each table_name in tables\n for each column_name in columns\n case (exists (select 1\n from table_name\n group by PersonID\n having min(column_name) = max(column_name))\n when true then 'Worker'\n when false then 'Person'\n end case\n end for\nend for\n
\nwith information schema and dynamic queries you could make the above proper PL/SQL or take the core query and script it in your favourite language.
\nEDIT:\nThe above assumes no NULLs in column_name.
\nEDIT2:\nOther variants of the core query can be
\nSELECT 1\nFROM \n(SELECT COUNT(DISTINCT column_name) AS distinct_values_by_pid\nFROM table_name\nGROUP BY PersonID) T\nHAVING MIN(distinct_values_by_pid) = MAX(distinct_values_by_pid)\n
\nWhich will return a row if all values per PersonID are the same.\n(this query also has problems with NULLS, but I consider NULLs a separate issue; you can always cast a NULL to some out-of-domain value for purposes of the above query)
\nThe above query can be also written as
\nSELECT MIN(c1)=MAX(c1), MIN(c2)=MAX(c2), ...\nFROM \n(SELECT COUNT(DISTINCT column_name_1) AS c1, COUNT(DISTINCT column_name_2) AS c2, ...\nFROM table_name\nGROUP BY PersonID) T\n
\nWhich will test multiple columns at the same time returning true for columns that belong to 'Workers' and false for columns that should go into 'Persons'.
\n
soup wrap:
Since 10 tables is not a lot, here is (some sort of) pseudo code
for each table_name in tables
for each column_name in columns
case (not exists (select 1
from table_name
group by PersonID
having min(column_name) <> max(column_name)))
when true then 'Worker'
when false then 'Person'
end case
end for
end for
With the data dictionary (e.g. ALL_TAB_COLUMNS in Oracle) and dynamic queries you could make the above proper PL/SQL, or take the core query and script it in your favourite language.
EDIT:
The above assumes no NULLs in column_name.
EDIT2:
Other variants of the core query can be
SELECT 1
FROM
(SELECT COUNT(DISTINCT column_name) AS distinct_values_by_pid
FROM table_name
GROUP BY PersonID) T
HAVING MIN(distinct_values_by_pid) = MAX(distinct_values_by_pid)
Which will return a row if all values per PersonID are the same.
(this query also has problems with NULLS, but I consider NULLs a separate issue; you can always cast a NULL to some out-of-domain value for purposes of the above query)
The above query can be also written as
SELECT MIN(c1)=MAX(c1), MIN(c2)=MAX(c2), ...
FROM
(SELECT COUNT(DISTINCT column_name_1) AS c1, COUNT(DISTINCT column_name_2) AS c2, ...
FROM table_name
GROUP BY PersonID) T
Which will test multiple columns at the same time returning true for columns that belong to 'Workers' and false for columns that should go into 'Persons'.
qid & accept id:
(3821642, 3822300)
query:
Parse SQL file to separate columns
soup:
What about when there are three e-mails/names?\nWith shown data it should be easy to do
\nselect replace(substring(substring_index(`Personnel`, ',', 1),length(substring_index(`Personnel`, ',', 1 - 1)) + 1), ',', '') personnel1,\n replace(substring(substring_index(`Personnel`, ',', 2),length(substring_index(`Personnel`, ',', 2 - 1)) + 1), ',', '') personnel2,\nfrom `pubs_for_client`\n
\nThe above will split the Personnel column on delimiter ,.
\nYou can then split these fields on delimiter ( and ) to split personnel into name, position and e-mail
\nThe SQL will be ugly (because mysql does not have split function), but it will get the job done.
\nThe split expression was taken from comments on mysql documentation (search for split).
\nYou can also
\nCREATE FUNCTION strSplit(x varchar(255), delim varchar(12), pos int) returns varchar(255)\nreturn replace(substring(substring_index(x, delim, pos), length(substring_index(x, delim, pos - 1)) + 1), delim, '');\n
\nAfter which you can user
\nselect strSplit(`Personnel`, ',', 1), strSplit(`Personnel`, ',', 2)\nfrom `pubs_for_client`\n
\nYou could also create your own function that will extract directly names and e-mails.
\n
soup wrap:
What about when there are three e-mails/names?
With shown data it should be easy to do
select replace(substring(substring_index(`Personnel`, ',', 1),length(substring_index(`Personnel`, ',', 1 - 1)) + 1), ',', '') personnel1,
replace(substring(substring_index(`Personnel`, ',', 2),length(substring_index(`Personnel`, ',', 2 - 1)) + 1), ',', '') personnel2
from `pubs_for_client`
The above will split the Personnel column on delimiter ,.
You can then split these fields on delimiter ( and ) to split personnel into name, position and e-mail
The SQL will be ugly (because MySQL does not have a split function), but it will get the job done.
The split expression was taken from comments on mysql documentation (search for split).
You can also
CREATE FUNCTION strSplit(x varchar(255), delim varchar(12), pos int) returns varchar(255)
return replace(substring(substring_index(x, delim, pos), length(substring_index(x, delim, pos - 1)) + 1), delim, '');
After which you can use
select strSplit(`Personnel`, ',', 1), strSplit(`Personnel`, ',', 2)
from `pubs_for_client`
You could also create your own function that will extract directly names and e-mails.
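If each personnel entry has the form 'Name (email@example.com)', which is an assumption since the real format is not shown, the same function can be nested to pull the address out of the first entry:

```sql
-- Hypothetical format 'Name (email)': take entry 1, then the text
-- after '(' and before ')'.
SELECT strSplit(strSplit(strSplit(`Personnel`, ',', 1), '(', 2), ')', 1) AS email1
FROM `pubs_for_client`;
```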
qid & accept id:
(3827025, 3827089)
query:
Matching delimited string to table rows
soup:
Short Term Solution
\nFor your immediate problem, the FIND_IN_SET function is what you want to use for joining:
\nFor People
\nSELECT p.*\n FROM PEOPLE p\n JOIN HOUSES h ON FIND_IN_SET(p.name, h.people)\n WHERE h.name = ?\n
\nFor Houses
\nSELECT h.*\n FROM HOUSES h\n JOIN PEOPLE p ON FIND_IN_SET(h.name, p.houses)\n WHERE p.name = ?\n
\nLong Term Solution
\nIs to properly model this by adding a table to link houses to people, because you're likely storing redundant relationships in both tables:
\nCREATE TABLE people_houses (\n house_id int,\n person_id int,\n PRIMARY KEY (house_id, person_id),\n FOREIGN KEY (house_id) REFERENCES houses (id),\n FOREIGN KEY (person_id) REFERENCES people (id)\n)\n
\n
soup wrap:
Short Term Solution
For your immediate problem, the FIND_IN_SET function is what you want to use for joining:
For People
SELECT p.*
FROM PEOPLE p
JOIN HOUSES h ON FIND_IN_SET(p.name, h.people)
WHERE h.name = ?
For Houses
SELECT h.*
FROM HOUSES h
JOIN PEOPLE p ON FIND_IN_SET(h.name, p.houses)
WHERE p.name = ?
Long Term Solution
Is to properly model this by adding a table to link houses to people, because you're likely storing redundant relationships in both tables:
CREATE TABLE people_houses (
house_id int,
person_id int,
PRIMARY KEY (house_id, person_id),
FOREIGN KEY (house_id) REFERENCES houses (id),
FOREIGN KEY (person_id) REFERENCES people (id)
)
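Once the junction table is in place, the lookups above become ordinary indexed joins instead of FIND_IN_SET string scans. A minimal runnable sketch using Python's built-in sqlite3 (table contents are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE houses (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE people_houses (
        house_id INTEGER REFERENCES houses (id),
        person_id INTEGER REFERENCES people (id),
        PRIMARY KEY (house_id, person_id)
    );
    INSERT INTO people VALUES (1, 'Alice'), (2, 'Bob');
    INSERT INTO houses VALUES (10, 'Red House');
    INSERT INTO people_houses VALUES (10, 1), (10, 2);
""")
# All people in a given house: an ordinary join through the
# junction table, no comma-list string matching needed.
rows = cur.execute("""
    SELECT p.name
      FROM people p
      JOIN people_houses ph ON ph.person_id = p.id
      JOIN houses h         ON h.id = ph.house_id
     WHERE h.name = ?
     ORDER BY p.name
""", ("Red House",)).fetchall()
print([r[0] for r in rows])  # ['Alice', 'Bob']
```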
qid & accept id:
(3886340, 3886391)
query:
SQL Select Return Default Value If Null
soup:
Two things:
\n\n- Use
left outer join instead of inner join to get all the listings, even with missing pictures. \nUse coalesce to apply the default
\nSELECT Listing.Title\n , Listing.MLS\n , Pictures.PictureTH\n , coalesce(Pictures.Picture, 'default.jpg') as Picture\n , Listing.ID \nFROM Listing \nLEFT OUTER JOIN Pictures \n ON Listing.ID = Pictures.ListingID \n
\n
\nEDIT To limit to one row:
\nSELECT Listing.Title\n , Listing.MLS\n , Pictures.PictureTH\n , coalesce(Pictures.Picture, 'default.jpg') as Picture\n , Listing.ID \nFROM Listing \nLEFT OUTER JOIN Pictures \n ON Listing.ID = Pictures.ListingID \nWHERE Pictures.ID is null\nOR Pictures.ID = (SELECT MIN(ID) \n FROM Pictures \n WHERE (ListingID = Listing.ID))) \n
\n
soup wrap:
Two things:
- Use left outer join instead of inner join to get all the listings, even with missing pictures.
- Use coalesce to apply the default:
SELECT Listing.Title
, Listing.MLS
, Pictures.PictureTH
, coalesce(Pictures.Picture, 'default.jpg') as Picture
, Listing.ID
FROM Listing
LEFT OUTER JOIN Pictures
ON Listing.ID = Pictures.ListingID
EDIT To limit to one row:
SELECT Listing.Title
, Listing.MLS
, Pictures.PictureTH
, coalesce(Pictures.Picture, 'default.jpg') as Picture
, Listing.ID
FROM Listing
LEFT OUTER JOIN Pictures
ON Listing.ID = Pictures.ListingID
WHERE Pictures.ID is null
OR Pictures.ID = (SELECT MIN(ID)
FROM Pictures
WHERE (ListingID = Listing.ID))
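The LEFT OUTER JOIN plus COALESCE pattern is portable SQL; here is a runnable sketch of the same idea using Python's built-in sqlite3 (data invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE Listing  (ID INTEGER PRIMARY KEY, Title TEXT);
    CREATE TABLE Pictures (ID INTEGER PRIMARY KEY, ListingID INTEGER, Picture TEXT);
    INSERT INTO Listing  VALUES (1, 'Has photo'), (2, 'No photo');
    INSERT INTO Pictures VALUES (100, 1, 'house.jpg');
""")
# LEFT OUTER JOIN keeps listing 2 even though it has no picture row;
# COALESCE substitutes the default file name for the resulting NULL.
rows = cur.execute("""
    SELECT l.Title, COALESCE(p.Picture, 'default.jpg') AS Picture
      FROM Listing l
      LEFT OUTER JOIN Pictures p ON l.ID = p.ListingID
     ORDER BY l.ID
""").fetchall()
print(rows)  # [('Has photo', 'house.jpg'), ('No photo', 'default.jpg')]
```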
qid & accept id:
(3891758, 3892432)
query:
How to update one table from another one without specifying column names?
soup:
Not sure if you'll be able to accomplish this without using dynamic sql to build out the update statement in a variable.
\nThis statement will return a list of columns based on the table name you put in:
\nselect name from syscolumns\nwhere [id] = (select [id] from sysobjects where name = 'tablename')\n
\nNot sure if I can avoid a loop here....you'll need to load the results from above into a cursor and then build a query from it. Psuedo coded:
\nset @query = 'update [1607348182] set '\nload cursor --(we will use @name to hold the column name)\nwhile stillrecordsincursor\nset @query = @query + @name + ' = tmp_[1607348182]. ' +@name + ','\nload next value from cursor\nloop!\n
\nWhen the query is done being built in the loop, use exec sp_executesql @query.
\nJust a little warning...building dynamic sql in a loop like this can get a bit confusing. For trouble shooting, putting a select @query in the loop and watch the @query get built.
\nedit:\nNot sure if you'll be able to do all 1000 rows in an update at once...there are logical limits (varchar(8000)?) on the size that @query can grow too. You may have to divide the code so it handles 50 columns at a time. Put the columns from the syscolumns select statement into a temp table with an id and build your dynamic sql so it updates 20 columns (or 50?) at a time.
\nAnother alternative would be to use excel to mass build this. Do the column select and copy the results into column a of a spreadsheet. Put '= in column b, tmp.[12331312] in column c, copy column a into column D, and a comma into column e. Copy the entire spreadsheet into a notepad, and you should have the columns of the update statement built out for you. Not a bad solution if this is a one shot event, not sure if I'd rely on this as a on-going solution.
\n
soup wrap:
Not sure if you'll be able to accomplish this without using dynamic SQL to build the update statement in a variable.
This statement will return a list of columns based on the table name you put in:
select name from syscolumns
where [id] = (select [id] from sysobjects where name = 'tablename')
Not sure if I can avoid a loop here... you'll need to load the results from above into a cursor and then build a query from it. Pseudo code:
set @query = 'update [1607348182] set '
load cursor --(we will use @name to hold the column name)
while stillrecordsincursor
set @query = @query + @name + ' = tmp_[1607348182]. ' +@name + ','
load next value from cursor
loop!
When the query is done being built in the loop, use exec sp_executesql @query.
Just a little warning... building dynamic SQL in a loop like this can get a bit confusing. For troubleshooting, put a SELECT @query in the loop and watch the @query get built.
edit:
Not sure if you'll be able to do all 1000 columns in an update at once... there are logical limits (varchar(8000)?) on the size that @query can grow to. You may have to divide the code so it handles 50 columns at a time. Put the columns from the syscolumns select statement into a temp table with an id and build your dynamic sql so it updates 20 columns (or 50?) at a time.
Another alternative would be to use Excel to mass-build this. Do the column select and copy the results into column A of a spreadsheet. Put '= in column B, tmp.[12331312] in column C, copy column A into column D, and a comma into column E. Copy the entire spreadsheet into a notepad, and you should have the columns of the update statement built out for you. Not a bad solution if this is a one-shot event; not sure if I'd rely on this as an ongoing solution.
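The cursor loop above is just string assembly, so it can be sketched in any language. A hedged Python sketch of building the SET clause from a column list (table and column names are made up; in SQL Server the list would come from syscolumns):

```python
# Build "UPDATE target SET col = source.col, ..." from a column list,
# the way the cursor loop above does. Joining with ", " avoids the
# trailing comma the loop version would have to trim off.
columns = ["name", "address", "phone"]   # would come from syscolumns
target, source = "my_table", "tmp_my_table"

set_clause = ", ".join(f"{c} = {source}.{c}" for c in columns)
query = f"UPDATE {target} SET {set_clause}"
print(query)
# UPDATE my_table SET name = tmp_my_table.name, address = tmp_my_table.address, phone = tmp_my_table.phone
```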
qid & accept id:
(3895652, 3895665)
query:
How to Truncate the Decimal Places without Rounding Up?
soup:
using the round function you can try this
\nselect round(4.584406, 1, 1)\n
\nthe output will be
\n4.5\n
\nthe key is the third parameter
\nROUND ( numeric_expression , length [ ,function ] )\n
\n\nfunction
\nIs the type of operation to perform. function must be tinyint,\n
\nsmallint, or int. When function is\n omitted or has a value of 0 (default),\n numeric_expression is rounded. When a\n value other than 0 is specified,\n numeric_expression is truncated.
\n
\n
soup wrap:
Using the ROUND function you can try this:
select round(4.584406, 1, 1)
the output will be
4.5
the key is the third parameter
ROUND ( numeric_expression , length [ ,function ] )
function
Is the type of operation to perform. function must be tinyint, smallint, or int. When function is omitted or has a value of 0 (default), numeric_expression is rounded. When a value other than 0 is specified, numeric_expression is truncated.
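The same truncate-instead-of-round behavior can be reproduced anywhere by scaling and truncating toward zero, which is what ROUND's third parameter does. A small Python sketch (the helper name truncate is made up):

```python
import math

def truncate(value, places):
    # Mirror ROUND(x, places, 1): drop the digits past `places`
    # without rounding, truncating toward zero like T-SQL does.
    factor = 10 ** places
    return math.trunc(value * factor) / factor

print(truncate(4.584406, 1))   # 4.5
print(truncate(-4.58, 1))      # -4.5 (toward zero, not floor)
```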
qid & accept id:
(3900330, 3900450)
query:
MySQL get only rows with a unique value for a certain field
soup:
select min(id) from \n(\n select id, senderID pID from table where receiverID = '1'\n union\n select id, receiverID pID from table where senderID = '1'\n) as fred\ngroup by pID;\n
\nFor your data set, this gives:
\n+---------+\n| min(id) |\n+---------+\n| 0 |\n| 1 |\n+---------+\n
\n
soup wrap:
select min(id) from
(
select id, senderID pID from table where receiverID = '1'
union
select id, receiverID pID from table where senderID = '1'
) as fred
group by pID;
For your data set, this gives:
+---------+
| min(id) |
+---------+
| 0 |
| 1 |
+---------+
qid & accept id:
(3925043, 3925608)
query:
Most optimized way to get column totals in SQL Server 2005+
soup:
Any reason this isn't done as
\nselect prg.prefix_id, count(1) from tablename where... group by prg.prefix_id \n
\nIt would leave you with a result set of the prefix_id and the count of rows for each prefix_ID...might be preferential over a series of count(case) statements, and I think it should be quicker, but I can't confirm for sure.
\nI would use a subquery before resorting to @vars myself. Something like this:
\n select c1,c2,c1+c1 as total from \n (SELECT \n count(case when prg.prefix_id = 1 then iss.id end) as c1, \n count(case when prg.prefix_id = 2 then iss.id end) as c2 \n FROM dbo.TableName \n WHERE ... ) a\n
\n
soup wrap:
Any reason this isn't done as
select prg.prefix_id, count(1) from tablename where... group by prg.prefix_id
It would leave you with a result set of the prefix_id and the count of rows for each prefix_id... it might be preferable to a series of count(case) statements, and I think it should be quicker, but I can't confirm for sure.
I would use a subquery before resorting to @vars myself. Something like this:
select c1,c2,c1+c2 as total from
(SELECT
count(case when prg.prefix_id = 1 then iss.id end) as c1,
count(case when prg.prefix_id = 2 then iss.id end) as c2
FROM dbo.TableName
WHERE ... ) a
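Both shapes are standard SQL: a GROUP BY count, or conditional COUNT(CASE ...) columns plus a subquery for the total. A runnable sqlite3 sketch of the second shape with invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE issues (id INTEGER PRIMARY KEY, prefix_id INTEGER);
    INSERT INTO issues VALUES (1, 1), (2, 1), (3, 2);
""")
# Conditional aggregation: each COUNT(CASE ...) only counts rows
# where the CASE expression yields a non-NULL value.
c1, c2, total = cur.execute("""
    SELECT c1, c2, c1 + c2 AS total FROM (
        SELECT COUNT(CASE WHEN prefix_id = 1 THEN id END) AS c1,
               COUNT(CASE WHEN prefix_id = 2 THEN id END) AS c2
          FROM issues
    ) a
""").fetchone()
print(c1, c2, total)  # 2 1 3
```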
qid & accept id:
(3932947, 3933001)
query:
SQL Server 2005: how to subtract 6 month
soup:
You can use DATEADD:
\nselect DATEADD(month, -6, @d)\n
\nEDIT: if you need the number of days up to 6 months ago you can use DATEDIFF:
\nselect DATEDIFF(day, @d, DATEADD(month, -6, @d))\n
\n
soup wrap:
You can use DATEADD:
select DATEADD(month, -6, @d)
EDIT: if you need the number of days up to 6 months ago you can use DATEDIFF:
select DATEDIFF(day, @d, DATEADD(month, -6, @d))
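Outside T-SQL, subtracting months needs explicit day clamping (for example, August 31 minus 6 months must not yield February 31). A stdlib-only Python sketch of a DATEADD(month, ...)-style helper (the helper name add_months is made up):

```python
import calendar
from datetime import date

def add_months(d, months):
    # Shift the month, clamping the day of month to the target
    # month's length, as T-SQL's DATEADD(month, n, d) does.
    m = d.month - 1 + months
    year = d.year + m // 12
    month = m % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

six_months_ago = add_months(date(2010, 11, 23), -6)
print(six_months_ago)                               # 2010-05-23
print((date(2010, 11, 23) - six_months_ago).days)   # 184 days, like DATEDIFF(day, ...)
```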
qid & accept id:
(3951413, 3951429)
query:
How can I find and replace in MySQL?
soup:
UPDATE mytable \n SET server_path = REPLACE(server_path,'/home/','/new_home/');\n
\n\nEdit:
\nIf you need to update multiple fields you can string them along—with commas in between—in that same UPDATE statement, e.g.:
\nUPDATE mytable \n SET mycol1 = REPLACE(mycol1,'/home/','/new_home/'), \n mycol2 = REPLACE(mycol2,'/home/','/new_home/');\n
\n
soup wrap:
UPDATE mytable
SET server_path = REPLACE(server_path,'/home/','/new_home/');
Edit:
If you need to update multiple fields you can string them along—with commas in between—in that same UPDATE statement, e.g.:
UPDATE mytable
SET mycol1 = REPLACE(mycol1,'/home/','/new_home/'),
mycol2 = REPLACE(mycol2,'/home/','/new_home/');
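REPLACE behaves the same way in most engines, so the pattern can be demonstrated end to end with sqlite3 (table name and paths are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE mytable (server_path TEXT);
    INSERT INTO mytable VALUES ('/home/site/index.html'), ('/var/log/app.log');
""")
# REPLACE rewrites only rows containing the substring;
# other rows pass through unchanged.
cur.execute("UPDATE mytable SET server_path = REPLACE(server_path, '/home/', '/new_home/')")
rows = [r[0] for r in cur.execute("SELECT server_path FROM mytable ORDER BY server_path")]
print(rows)  # ['/new_home/site/index.html', '/var/log/app.log']
```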
qid & accept id:
(4017878, 4017990)
query:
php do something for every record in the database
soup:
Try to avoid the loop at all costs. Think set based processing, which means handle the entire set of rows within one SQL command.
\nI'm not entirely sure what you are attempting to do, as your question is a little vague. however, here are two possibly ways to handle what you are trying to do using set based thinking.
\nYou can do a JOIN in an UPDATE, essentially selecting from the parent table and UPDATEing the child table for all rows in a single UPDATE command.
\nUPDATE c\n SET Col1=p.Col1\n FROM ParentTable p\n INNER JOIN ChildTable c On p.ParentID=c.ParentID\n WHERE ...\n
\nyou can also INSERT based on a SELECT, so you would create one row from each row returned in the SELECT, like:
\nINSERT INTO ChildTable\n (Col1, Col2, Col3, Col4)\n SELECT\n p.ColA, p.ColB, 'constant value', p.ColC-p.ColD\n FROM ParentTable p\n WHERE... \n
\n
soup wrap:
Try to avoid the loop at all costs. Think set based processing, which means handle the entire set of rows within one SQL command.
I'm not entirely sure what you are attempting to do, as your question is a little vague. However, here are two possible ways to handle what you are trying to do using set based thinking.
You can do a JOIN in an UPDATE, essentially selecting from the parent table and updating the child table for all rows in a single UPDATE command.
UPDATE c
SET Col1=p.Col1
FROM ParentTable p
INNER JOIN ChildTable c On p.ParentID=c.ParentID
WHERE ...
You can also INSERT based on a SELECT, creating one row for each row returned by the SELECT, like:
INSERT INTO ChildTable
(Col1, Col2, Col3, Col4)
SELECT
p.ColA, p.ColB, 'constant value', p.ColC-p.ColD
FROM ParentTable p
WHERE...
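The INSERT based on a SELECT is portable; a runnable sqlite3 sketch that creates one child row per parent row (schema invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY, a INTEGER, b INTEGER);
    CREATE TABLE child  (parent_id INTEGER, label TEXT, diff INTEGER);
    INSERT INTO parent VALUES (1, 10, 4), (2, 7, 2);
""")
# One INSERT...SELECT creates a child row for every matching parent
# row -- no per-row loop in application code.
cur.execute("""
    INSERT INTO child (parent_id, label, diff)
    SELECT id, 'constant value', a - b FROM parent
""")
rows = cur.execute("SELECT parent_id, label, diff FROM child ORDER BY parent_id").fetchall()
print(rows)  # [(1, 'constant value', 6), (2, 'constant value', 5)]
```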
qid & accept id:
(4038960, 4038974)
query:
Basic MySQL Table Join?
soup:
SELECT `name`, `key`, ot.name AS OFFICE_NAME, `manager`, `id` \n FROM `ASSOCIATION_TABLE` at\n LEFT OUTER JOIN OFFICE_TABLE ot\n ON ot.id = at.office\n WHERE `association`.`customer`=4;\n
\nThat's an outer join to OFFICE_TABLE. Your resultset will include any records in the ASSOCIATION_TABLE that do not have records in OFFICE_TABLE.
\nIf you only want to return results with records in OFFICE_TABLE you will want an inner join, e.g.:
\nSELECT `name`, `key`, ot.name AS OFFICE_NAME, `manager`, `id` \n FROM `ASSOCIATION_TABLE` at\n INNER JOIN OFFICE_TABLE ot\n ON ot.id = at.office\n WHERE `association`.`customer`=4;\n
\n
soup wrap:
SELECT `name`, `key`, ot.name AS OFFICE_NAME, `manager`, `id`
FROM `ASSOCIATION_TABLE` at
LEFT OUTER JOIN OFFICE_TABLE ot
ON ot.id = at.office
WHERE `at`.`customer`=4;
That's an outer join to OFFICE_TABLE. Your result set will include any records in ASSOCIATION_TABLE that do not have matching records in OFFICE_TABLE.
If you only want to return results with records in OFFICE_TABLE you will want an inner join, e.g.:
SELECT `name`, `key`, ot.name AS OFFICE_NAME, `manager`, `id`
FROM `ASSOCIATION_TABLE` at
INNER JOIN OFFICE_TABLE ot
ON ot.id = at.office
WHERE `at`.`customer`=4;
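The difference between the two joins is easiest to see with one unmatched row. A runnable sqlite3 sketch with invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE office (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE assoc  (id INTEGER PRIMARY KEY, name TEXT, office INTEGER);
    INSERT INTO office VALUES (1, 'HQ');
    INSERT INTO assoc  VALUES (1, 'with office', 1), (2, 'no office', NULL);
""")
# LEFT OUTER JOIN keeps the association row with no office (NULLs fill in);
# INNER JOIN drops it.
outer = cur.execute("""
    SELECT a.name, o.name FROM assoc a
    LEFT OUTER JOIN office o ON o.id = a.office
""").fetchall()
inner = cur.execute("""
    SELECT a.name, o.name FROM assoc a
    INNER JOIN office o ON o.id = a.office
""").fetchall()
print(len(outer), len(inner))  # 2 1
```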
qid & accept id:
(4062845, 4063011)
query:
How can I save semantic information in a MySQL table?
soup:
You're working on a hard and interesting problem! You may get some interesting ideas from looking at the Dublin Core Metadata Initiative.
\nhttp://dublincore.org/metadata-basics/
\nTo make it simple, think of your metadata items as all fitting in one table.
\ne.g.
\nBallmer employed-by Microsoft\nBallmer is-a Person\nMicrosoft is-a Organization\nMicrosoft run-by Ballmer\nSoftImage acquired-by Microsoft\nSoftImage is-a Organization\nJoel Spolsky is-a Person\nJoel Spolsky formerly-employed-by Microsoft\nSpolsky, Joel dreamed-up StackOverflow\nStackOverflow is-a Website\nSocrates is-a Person\nSocrates died-on (some date)\n
\nThe trick here is that some, but not all, your first and third column values need to be BOTH arbitrary text AND serve as indexes into the first and third columns. Then, if you're trying to figure out what your data base has on Spolsky, you can full-text search your first and third columns for his name. You'll get out a bunch of triplets. The values you find will tell you a lot. If you want to know more, you can search again.
\nTo pull this off you'll probably need to have five columns, as follows:
\nFull text subject (whatever your user puts in)\nCanonical subject (what your user puts in, massaged into a standard form)\nRelation (is-a etc)\nFull text object\nCanonical object\n
\nThe point of the canonical forms of your subject and object is to allow queries like this to work, even if your user puts in "Joel Spolsky" and "Spolsky, Joel" in two different places even if they mean the same person.
\nSELECT * \n FROM relationships a\n JOIN relationships b (ON a.canonical_object = b.canonical_subject)\n WHERE MATCH (subject,object) AGAINST ('Spolsky')\n
\n
soup wrap:
You're working on a hard and interesting problem! You may get some interesting ideas from looking at the Dublin Core Metadata Initiative.
http://dublincore.org/metadata-basics/
To make it simple, think of your metadata items as all fitting in one table.
e.g.
Ballmer employed-by Microsoft
Ballmer is-a Person
Microsoft is-a Organization
Microsoft run-by Ballmer
SoftImage acquired-by Microsoft
SoftImage is-a Organization
Joel Spolsky is-a Person
Joel Spolsky formerly-employed-by Microsoft
Spolsky, Joel dreamed-up StackOverflow
StackOverflow is-a Website
Socrates is-a Person
Socrates died-on (some date)
The trick here is that some, but not all, of your first and third column values need to be BOTH arbitrary text AND serve as indexes into the first and third columns. Then, if you're trying to figure out what your database has on Spolsky, you can full-text search your first and third columns for his name. You'll get out a bunch of triplets. The values you find will tell you a lot. If you want to know more, you can search again.
To pull this off you'll probably need to have five columns, as follows:
Full text subject (whatever your user puts in)
Canonical subject (what your user puts in, massaged into a standard form)
Relation (is-a etc)
Full text object
Canonical object
The point of the canonical forms of your subject and object is to allow queries like this to work, even if your user puts in "Joel Spolsky" and "Spolsky, Joel" in two different places even if they mean the same person.
SELECT *
FROM relationships a
JOIN relationships b ON (a.canonical_object = b.canonical_subject)
WHERE MATCH (subject,object) AGAINST ('Spolsky')
qid & accept id:
(4062865, 4062914)
query:
Adding a unique row count to a SQL 2008 "for xml path" statement?
soup:
You could alias @@rowcount to '@id', like:
\ndeclare @t table (name varchar(25))\n\ninsert @t (name) values ('jddjdjd')\n\nselect @@rowcount as '@id'\n, name\nfrom @t\nfor xml path('row'), root('rows')\n
\nThis prints:
\n\n \n jddjdjd \n
\n \n
\nHowever, I'm not sure it's clearly defined what @@rowcount means at the point where it gets turned into an attribute.
\n
soup wrap:
You could alias @@rowcount to '@id', like:
declare @t table (name varchar(25))
insert @t (name) values ('jddjdjd')
select @@rowcount as '@id'
, name
from @t
for xml path('row'), root('rows')
This prints:
<rows>
  <row id="1">
    <name>jddjdjd</name>
  </row>
</rows>
However, I'm not sure it's clearly defined what @@rowcount means at the point where it gets turned into an attribute.
qid & accept id:
(4212229, 4212279)
query:
Deleting dynamically managed tables in MySQL
soup:
you can run this query and get all the sql queries that you need to run;
\nselect concat( 'drop table ', a.table_name, ';' )\nfrom information_schema.tables a \nwhere a.table_name like 'dynamic_%';\n
\nyou can insert it to file like
\nINTO OUTFILE '/tmp/delete.sql';\n
\nupdate according to alexandre comment
\nSET @v = ( select concat( 'drop table ', group_concat(a.table_name))\n from information_schema.tables a \n where a.table_name like 'dynamic_%'\n AND a.table_schema = DATABASE()\n;);\n PREPARE s FROM @v; \nEXECUTE s;\n
\n
soup wrap:
You can run this query to generate all the SQL statements that you need to run:
select concat( 'drop table ', a.table_name, ';' )
from information_schema.tables a
where a.table_name like 'dynamic_%';
You can write the result to a file with
INTO OUTFILE '/tmp/delete.sql';
Update, per Alexandre's comment:
SET @v = ( select concat( 'drop table ', group_concat(a.table_name))
from information_schema.tables a
where a.table_name like 'dynamic_%'
AND a.table_schema = DATABASE()
);
PREPARE s FROM @v;
EXECUTE s;
qid & accept id:
(4225984, 4226581)
query:
"Pivoting" non-aggregate data in SQL Server
soup:
To get the basic numbered-role data, we might start with
\nSELECT\n org_nbr\n , r1.assoc_id role1_ID\n , r1.last_name role1_name\n , r2.assoc_id role2_ID\n , r2.last_name role2_name\n , r3.assoc_id role3_ID\n , r3.last_name role3_name\n , r4.assoc_id role4_ID\n , r4.last_name role4_name\n , r5.assoc_id role5_ID\n , r5.last_name role5_name\n , r6.assoc_id role6_ID\n , r6.last_name role6_name\nFROM\n ASSOC_ROLE ar\n LEFT JOIN ASSOCIATE r1 ON ar.role_id = 1 AND ar.assoc_id = r1.assoc_id\n LEFT JOIN ASSOCIATE r2 ON ar.role_id = 2 AND ar.assoc_id = r2.assoc_id\n LEFT JOIN ASSOCIATE r3 ON ar.role_id = 3 AND ar.assoc_id = r3.assoc_id\n LEFT JOIN ASSOCIATE r4 ON ar.role_id = 4 AND ar.assoc_id = r4.assoc_id\n LEFT JOIN ASSOCIATE r5 ON ar.role_id = 5 AND ar.assoc_id = r5.assoc_id\n LEFT JOIN ASSOCIATE r6 ON ar.role_id = 6 AND ar.assoc_id = r6.assoc_id\n
\nBUT this will give us, for each org_nbr, a separate row for each role_id that has data! Which is not what we want - so we need to GROUP BY org_nbr. But then we need to either GROUP BY or aggregate over every column in the SELECT list! The trick then is to come up with an aggregate function that will placate SQL Server and give us the results we want. In this case, MIN will do the job:
\nSELECT\n org_nbr\n , MIN(r1.assoc_id) role1_ID\n , MIN(r1.last_name) role1_name\n , MIN(r2.assoc_id) role2_ID\n , MIN(r2.last_name) role2_name\n , MIN(r3.assoc_id) role3_ID\n , MIN(r3.last_name) role3_name\n , MIN(r4.assoc_id) role4_ID\n , MIN(r4.last_name) role4_name\n , MIN(r5.assoc_id) role5_ID\n , MIN(r5.last_name) role5_name\n , MIN(r6.assoc_id) role6_ID\n , MIN(r6.last_name) role6_name\nFROM\n ASSOC_ROLE ar\n LEFT JOIN ASSOCIATE r1 ON ar.role_id = 1 AND ar.assoc_id = r1.assoc_id\n LEFT JOIN ASSOCIATE r2 ON ar.role_id = 2 AND ar.assoc_id = r2.assoc_id\n LEFT JOIN ASSOCIATE r3 ON ar.role_id = 3 AND ar.assoc_id = r3.assoc_id\n LEFT JOIN ASSOCIATE r4 ON ar.role_id = 4 AND ar.assoc_id = r4.assoc_id\n LEFT JOIN ASSOCIATE r5 ON ar.role_id = 5 AND ar.assoc_id = r5.assoc_id\n LEFT JOIN ASSOCIATE r6 ON ar.role_id = 6 AND ar.assoc_id = r6.assoc_id\nGROUP BY\n org_nbr\n
\nOutput:
\norg_nbr role1_ID role1_name role2_ID role2_name role3_ID role3_name role4_ID role4_name role5_ID role5_name role6_ID role6_name\n---------- ----------- ---------- ----------- ---------- ----------- ---------- ----------- ---------- ----------- ---------- ----------- ----------\n1AA 1447 Cooper NULL NULL 1448 Collins 1448 Collins 1448 Collins 1449 Lynch\nWarning: Null value is eliminated by an aggregate or other SET operation.\n
\nOf course this will fall short should the maximum role_id increase...
\n
soup wrap:
To get the basic numbered-role data, we might start with
SELECT
org_nbr
, r1.assoc_id role1_ID
, r1.last_name role1_name
, r2.assoc_id role2_ID
, r2.last_name role2_name
, r3.assoc_id role3_ID
, r3.last_name role3_name
, r4.assoc_id role4_ID
, r4.last_name role4_name
, r5.assoc_id role5_ID
, r5.last_name role5_name
, r6.assoc_id role6_ID
, r6.last_name role6_name
FROM
ASSOC_ROLE ar
LEFT JOIN ASSOCIATE r1 ON ar.role_id = 1 AND ar.assoc_id = r1.assoc_id
LEFT JOIN ASSOCIATE r2 ON ar.role_id = 2 AND ar.assoc_id = r2.assoc_id
LEFT JOIN ASSOCIATE r3 ON ar.role_id = 3 AND ar.assoc_id = r3.assoc_id
LEFT JOIN ASSOCIATE r4 ON ar.role_id = 4 AND ar.assoc_id = r4.assoc_id
LEFT JOIN ASSOCIATE r5 ON ar.role_id = 5 AND ar.assoc_id = r5.assoc_id
LEFT JOIN ASSOCIATE r6 ON ar.role_id = 6 AND ar.assoc_id = r6.assoc_id
BUT this will give us, for each org_nbr, a separate row for each role_id that has data! Which is not what we want - so we need to GROUP BY org_nbr. But then we need to either GROUP BY or aggregate over every column in the SELECT list! The trick then is to come up with an aggregate function that will placate SQL Server and give us the results we want. In this case, MIN will do the job:
SELECT
org_nbr
, MIN(r1.assoc_id) role1_ID
, MIN(r1.last_name) role1_name
, MIN(r2.assoc_id) role2_ID
, MIN(r2.last_name) role2_name
, MIN(r3.assoc_id) role3_ID
, MIN(r3.last_name) role3_name
, MIN(r4.assoc_id) role4_ID
, MIN(r4.last_name) role4_name
, MIN(r5.assoc_id) role5_ID
, MIN(r5.last_name) role5_name
, MIN(r6.assoc_id) role6_ID
, MIN(r6.last_name) role6_name
FROM
ASSOC_ROLE ar
LEFT JOIN ASSOCIATE r1 ON ar.role_id = 1 AND ar.assoc_id = r1.assoc_id
LEFT JOIN ASSOCIATE r2 ON ar.role_id = 2 AND ar.assoc_id = r2.assoc_id
LEFT JOIN ASSOCIATE r3 ON ar.role_id = 3 AND ar.assoc_id = r3.assoc_id
LEFT JOIN ASSOCIATE r4 ON ar.role_id = 4 AND ar.assoc_id = r4.assoc_id
LEFT JOIN ASSOCIATE r5 ON ar.role_id = 5 AND ar.assoc_id = r5.assoc_id
LEFT JOIN ASSOCIATE r6 ON ar.role_id = 6 AND ar.assoc_id = r6.assoc_id
GROUP BY
org_nbr
Output:
org_nbr role1_ID role1_name role2_ID role2_name role3_ID role3_name role4_ID role4_name role5_ID role5_name role6_ID role6_name
---------- ----------- ---------- ----------- ---------- ----------- ---------- ----------- ---------- ----------- ---------- ----------- ----------
1AA 1447 Cooper NULL NULL 1448 Collins 1448 Collins 1448 Collins 1449 Lynch
Warning: Null value is eliminated by an aggregate or other SET operation.
Of course this will fall short should the maximum role_id increase...
qid & accept id:
(4226144, 4226200)
query:
Delete row when a table has an FK relationship
soup:
delete \n from projects \n where documentsFK = (\n select documentFK \n from documents \n where documentsFK > 125\n );\n\ndelete \n from documents \n where documentsFK > 125;\n
\nEDIT
\ndelete \n from projects \n where documentsFK in (\n select documentFK \n from documents \n where documentsFK > 125\n );\n\ndelete \n from documents \n where documentsFK > 125;\n
\n
soup wrap:
delete
from projects
where documentsFK = (
select documentFK
from documents
where documentsFK > 125
);
delete
from documents
where documentsFK > 125;
EDIT
delete
from projects
where documentsFK in (
select documentFK
from documents
where documentsFK > 125
);
delete
from documents
where documentsFK > 125;
qid & accept id:
(4257442, 4257582)
query:
SQL Server How to persist and use a time across different time zones
soup:
In SQL Server 2008, use the DATETIMEOFFSET data type which is a DATETIME plus a timezone offset included.
\nSELECT CAST('2010-11-23 16:35:29+09:00' AS datetimeoffset) \n
\nwould be Nov 23, 2010, 4:35pm in a +9 hour (from GMT) timezone.
\nSQL Server 2008 also contains functions and SQL commands to convert DATETIMEOFFSET values from one timezone to another:
\nSELECT \nSWITCHOFFSET(CAST('2010-11-23 16:35:29+09:00' AS datetimeoffset), '+01:00')\n
\nwould result in:
\n2010-11-23 08:35:29.0000000 +01:00\n
\nSame time, different timezone (+1 hour from GMT)
\n
soup wrap:
In SQL Server 2008, use the DATETIMEOFFSET data type which is a DATETIME plus a timezone offset included.
SELECT CAST('2010-11-23 16:35:29+09:00' AS datetimeoffset)
would be Nov 23, 2010, 4:35pm in a +9 hour (from GMT) timezone.
SQL Server 2008 also contains functions and SQL commands to convert DATETIMEOFFSET values from one timezone to another:
SELECT
SWITCHOFFSET(CAST('2010-11-23 16:35:29+09:00' AS datetimeoffset), '+01:00')
would result in:
2010-11-23 08:35:29.0000000 +01:00
Same time, different timezone (+1 hour from GMT)
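Outside the database, the same keep-the-instant, change-the-offset conversion is what datetime.astimezone does. A Python sketch mirroring the SWITCHOFFSET example above:

```python
from datetime import datetime, timedelta, timezone

# 2010-11-23 16:35:29 at +09:00, as in the SELECT CAST example above
d = datetime(2010, 11, 23, 16, 35, 29, tzinfo=timezone(timedelta(hours=9)))

# Equivalent of SWITCHOFFSET(..., '+01:00'): same instant, new offset
shifted = d.astimezone(timezone(timedelta(hours=1)))
print(shifted.isoformat())  # 2010-11-23T08:35:29+01:00
assert shifted == d         # still the same point in time
```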
qid & accept id:
(4283031, 4283064)
query:
how to get last date form DB table mysql
soup:
A. This answers 'where date is the closest date from now...':
\nSELECT *\nFROM `categoriesSupports`\nWHERE `date` IN (\n SELECT `date`\n FROM `categoriesSupports`\n ORDER BY `date` DESC\n LIMIT 1\n)\n
\nNotes:
\n\n- You can set
LIMIT n to select entries for more dates. \n- If you only want for the last date you can replace
IN with = because the sub-select will return only one value. \n- If your table includes future dates replace
ORDER BY date DESC with ORDER BY ABS(NOW() - date) ASC. \n
\n
\nA solution with JOINS. Will work only if you have past dates.
\nSELECT a.*\nFROM `categoriesSupports` AS a\nLEFT JOIN `categoriesSupports` AS b\n ON b.date > a.date\nWHERE b.id IS NULL\n
\nAdded just for reference.
\n
\nB. This answers 'where date is in the last 3 days (including today)':
\nSELECT *\nFROM `categoriesSupports`\nWHERE DATEDIFF(NOW(), `date`) < 3\n
\nReplace 3 with any number if you want more or less days.
\n
\nC. Same as A., but per support id
\nSELECT a.*\nFROM `categoriesSupports` AS a\nLEFT JOIN `categoriesSupports` AS b\n ON b.support_id = a.support_id AND b.date > a.date\nWHERE b.id IS NULL\n
\nThis answers the latest version of the question.
\n
soup wrap:
A. This answers 'where date is the closest date from now...':
SELECT *
FROM `categoriesSupports`
WHERE `date` IN (
SELECT `date`
FROM `categoriesSupports`
ORDER BY `date` DESC
LIMIT 1
)
Notes:
- You can set LIMIT n to select entries for more dates.
- If you only want the last date you can replace IN with = because the sub-select will return only one value.
- If your table includes future dates replace ORDER BY date DESC with ORDER BY ABS(NOW() - date) ASC.
A solution with JOINS. Will work only if you have past dates.
SELECT a.*
FROM `categoriesSupports` AS a
LEFT JOIN `categoriesSupports` AS b
ON b.date > a.date
WHERE b.id IS NULL
Added just for reference.
B. This answers 'where date is in the last 3 days (including today)':
SELECT *
FROM `categoriesSupports`
WHERE DATEDIFF(NOW(), `date`) < 3
Replace 3 with any number if you want more or less days.
C. Same as A., but per support id
SELECT a.*
FROM `categoriesSupports` AS a
LEFT JOIN `categoriesSupports` AS b
ON b.support_id = a.support_id AND b.date > a.date
WHERE b.id IS NULL
This answers the latest version of the question.
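Variant C's self anti-join (keep a row only when no later row exists for the same support_id) is portable; a runnable sqlite3 sketch with invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE categoriesSupports (id INTEGER PRIMARY KEY, support_id INTEGER, date TEXT);
    INSERT INTO categoriesSupports VALUES
        (1, 7, '2010-01-01'), (2, 7, '2010-03-01'), (3, 8, '2010-02-01');
""")
# Row a survives only if no row b with the same support_id has a
# later date -- i.e. the latest row per support_id.
rows = cur.execute("""
    SELECT a.support_id, a.date
      FROM categoriesSupports a
      LEFT JOIN categoriesSupports b
        ON b.support_id = a.support_id AND b.date > a.date
     WHERE b.id IS NULL
     ORDER BY a.support_id
""").fetchall()
print(rows)  # [(7, '2010-03-01'), (8, '2010-02-01')]
```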
qid & accept id:
(4301603, 4301887)
query:
Month name in sql server 2008
soup:
SELECT DATENAME(month, ) AS "Month Name" FROM \n
\nEx:
\nSELECT DATENAME(month, JoinDate) AS "Month Name" FROM EMPLOYEE\n
\nThis value would return the monthname corresponding to the date value in the field JoinDate from the table EMPLOYEE.
\n
soup wrap:
SELECT DATENAME(month, <date_column>) AS "Month Name" FROM <table_name>
Ex:
SELECT DATENAME(month, JoinDate) AS "Month Name" FROM EMPLOYEE
This would return the month name corresponding to the date value in the field JoinDate from the table EMPLOYEE.
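For reference, the same month-name lookup in Python is strftime('%B') (locale-dependent; shown here assuming an English locale):

```python
from datetime import date

# Python equivalent of DATENAME(month, ...): the locale's full month name
join_date = date(2010, 11, 23)
print(join_date.strftime('%B'))  # November (in an English/C locale)
```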
qid & accept id:
(4352912, 4353096)
query:
Select distinct not-null rows SQL server 2005
soup:
This works, don't know if it can be made any simpler
\nSELECT ID1, ID2, ID3, ID4, ID5\nFROM IDS OUTT\nWHERE NOT EXISTS (SELECT 1\n FROM IDS INN\n WHERE OUTT.ID != INN.ID AND\n (ISNULL(OUTT.ID1, INN.ID1) = INN.ID1 OR (INN.ID1 IS NULL AND OUTT.ID1 IS NULL)) AND\n (ISNULL(OUTT.ID2, INN.ID2) = INN.ID2 OR (INN.ID2 IS NULL AND OUTT.ID2 IS NULL)) AND\n (ISNULL(OUTT.ID3, INN.ID3) = INN.ID3 OR (INN.ID3 IS NULL AND OUTT.ID3 IS NULL)) AND\n (ISNULL(OUTT.ID4, INN.ID4) = INN.ID4 OR (INN.ID4 IS NULL AND OUTT.ID4 IS NULL)) AND\n (ISNULL(OUTT.ID5, INN.ID5) = INN.ID5 OR (INN.ID5 IS NULL AND OUTT.ID5 IS NULL)))\n
\nEDIT: Found a sweeter alternative, if your ids never have negative numbers
\nSELECT ID1, ID2, ID3, ID4, ID5\nFROM IDS OUTT\nWHERE NOT EXISTS (SELECT 1\n FROM IDS INN\n WHERE OUTT.ID != INN.ID AND\n coalesce(OUTT.ID1, INN.ID1,-1) = isnull(INN.ID1,-1) AND\n coalesce(OUTT.ID2, INN.ID2,-1) = isnull(INN.ID2,-1) AND\n coalesce(OUTT.ID3, INN.ID3,-1) = isnull(INN.ID3,-1) AND\n coalesce(OUTT.ID4, INN.ID4,-1) = isnull(INN.ID4,-1) AND\n coalesce(OUTT.ID5, INN.ID5,-1) = isnull(INN.ID5,-1)) \n
\nEDIT2: There is one case where it won't work - in case two rows (with different ids) have exact same form. I am assuming that it is not there. If such a thing is present, then first create a view with a select distinct on the base table first, and then apply this query.
\n
soup wrap:
This works; I don't know if it can be made any simpler:
SELECT ID1, ID2, ID3, ID4, ID5
FROM IDS OUTT
WHERE NOT EXISTS (SELECT 1
FROM IDS INN
WHERE OUTT.ID != INN.ID AND
(ISNULL(OUTT.ID1, INN.ID1) = INN.ID1 OR (INN.ID1 IS NULL AND OUTT.ID1 IS NULL)) AND
(ISNULL(OUTT.ID2, INN.ID2) = INN.ID2 OR (INN.ID2 IS NULL AND OUTT.ID2 IS NULL)) AND
(ISNULL(OUTT.ID3, INN.ID3) = INN.ID3 OR (INN.ID3 IS NULL AND OUTT.ID3 IS NULL)) AND
(ISNULL(OUTT.ID4, INN.ID4) = INN.ID4 OR (INN.ID4 IS NULL AND OUTT.ID4 IS NULL)) AND
(ISNULL(OUTT.ID5, INN.ID5) = INN.ID5 OR (INN.ID5 IS NULL AND OUTT.ID5 IS NULL)))
EDIT: Found a sweeter alternative, if your ids never have negative numbers
SELECT ID1, ID2, ID3, ID4, ID5
FROM IDS OUTT
WHERE NOT EXISTS (SELECT 1
FROM IDS INN
WHERE OUTT.ID != INN.ID AND
coalesce(OUTT.ID1, INN.ID1,-1) = isnull(INN.ID1,-1) AND
coalesce(OUTT.ID2, INN.ID2,-1) = isnull(INN.ID2,-1) AND
coalesce(OUTT.ID3, INN.ID3,-1) = isnull(INN.ID3,-1) AND
coalesce(OUTT.ID4, INN.ID4,-1) = isnull(INN.ID4,-1) AND
coalesce(OUTT.ID5, INN.ID5,-1) = isnull(INN.ID5,-1))
EDIT2: There is one case where it won't work - when two rows (with different ids) have exactly the same values. I am assuming that is not the case. If such duplicates are present, first create a view with a SELECT DISTINCT on the base table, and then apply this query.
qid & accept id:
(4400347, 4400444)
query:
How to get a of count of items for multiple tables
soup:
To get counts by ip and by day, the easiest way is to flatten the query:
\nSELECT 'day1' AS day, srcIP, count(*) AS count FROM Day1 GROUP BY srcIP\nUNION\nSELECT 'day2' AS day, srcIP, count(*) AS count FROM Day2 GROUP BY srcIP\nUNION\nSELECT 'day3' AS day, srcIP, count(*) AS count FROM Day3 GROUP BY srcIP\n
\nand then transpose it in your app to get the table format you want.
\nAlternatively
\nYou can also do it by joining on IP:
\nSELECT srcIP, d1.count, d2.count, d3.count\nFROM (SELECT srcIP, count(*) AS count FROM Day1 GROUP BY srcIP) d1\nLEFT JOIN (SELECT srcIP, count(*) AS count FROM Day2 GROUP BY srcIP) d2 USING (srcIP)\nLEFT JOIN (SELECT srcIP, count(*) AS count FROM Day3 GROUP BY srcIP) d3 USING (srcIP)\n
\nBut here you will be missing IPs that are not in Day1, unless you first do a SELECT DISTINCT srcIP from a UNION of all days, which is pretty expensive. Basically this table structure doesn't lend itself too easily to this kind of aggregation.
\n
soup wrap:
To get counts by ip and by day, the easiest way is to flatten the query:
SELECT 'day1' AS day, srcIP, count(*) AS count FROM Day1 GROUP BY srcIP
UNION
SELECT 'day2' AS day, srcIP, count(*) AS count FROM Day2 GROUP BY srcIP
UNION
SELECT 'day3' AS day, srcIP, count(*) AS count FROM Day3 GROUP BY srcIP
and then transpose it in your app to get the table format you want.
Alternatively
You can also do it by joining on IP:
SELECT srcIP, d1.count, d2.count, d3.count
FROM (SELECT srcIP, count(*) AS count FROM Day1 GROUP BY srcIP) d1
LEFT JOIN (SELECT srcIP, count(*) AS count FROM Day2 GROUP BY srcIP) d2 USING (srcIP)
LEFT JOIN (SELECT srcIP, count(*) AS count FROM Day3 GROUP BY srcIP) d3 USING (srcIP)
But here you will be missing IPs that are not in Day1, unless you first do a SELECT DISTINCT srcIP from a UNION of all days, which is pretty expensive. Basically this table structure doesn't lend itself too easily to this kind of aggregation.
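A quick way to convince yourself of the flattened shape is to run it against toy tables. This sketch uses Python's sqlite3 with invented data; I use UNION ALL rather than UNION so identical count rows cannot collapse, and alias the count as cnt to sidestep the keyword:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Day1 (srcIP TEXT);
CREATE TABLE Day2 (srcIP TEXT);
INSERT INTO Day1 VALUES ('1.1.1.1'), ('1.1.1.1'), ('2.2.2.2');
INSERT INTO Day2 VALUES ('1.1.1.1');
""")
# One (day, srcIP, cnt) row per group, across all day tables.
flat = conn.execute("""
SELECT 'day1' AS day, srcIP, COUNT(*) AS cnt FROM Day1 GROUP BY srcIP
UNION ALL
SELECT 'day2', srcIP, COUNT(*) FROM Day2 GROUP BY srcIP
ORDER BY day, srcIP
""").fetchall()
```

Transposing `flat` into a day-per-column grid is then a small dictionary exercise in the application.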
qid & accept id:
(4429428, 4429746)
query:
Passing the tablename to the cursor
soup:
To expand on JackPDouglas' answer, you cannot utilize a param name as the [table] name in a cursor. You must utilize dynamic sql into a REF CURSOR
\nhttp://download.oracle.com/docs/cd/B10500_01/appdev.920/a96590/adg09dyn.htm#24492
\nCREATE OR REPLACE PROCEDURE dynaQuery(\n TAB IN VARCHAR2, \n sid in number ,\n cur OUT NOCOPY sys_refcursor) IS\n query_str VARCHAR2(200);\nBEGIN\n query_str := 'SELECT USERNAME FROM ' || tab\n || ' WHERE sid= :id';\ndbms_output.put_line(query_str);\n OPEN cur FOR query_str USING sid;\nEND ;\n/\n
\nCommence Example
\ncreate table test1(sid number, username varchar2(50));\ninsert into test1(sid, username) values(123,'abc');\ninsert into test1(sid, username) values(123,'ddd');\ninsert into test1(sid, username) values(222,'abc');\ncommit;\n/\n\n\n\n declare \n cur sys_refcursor ;\n sid number ;\n uName varchar2(50) ;\n begin\n sid := 123; \n dynaQuery('test1',sid, cur);\n LOOP\n FETCH cur INTO uName;\n DBMS_OUTPUT.put_line(uName);\n EXIT WHEN cur%NOTFOUND;\n -- process row here\n END LOOP;\nCLOSE CUR;\n\n\n end ;\n
\nOutput:
\nSELECT USERNAME FROM test1 WHERE sid= :id\nabc\nddd\nabc\nddd\nddd\n
\nEDIT: Added Close CUR that was rightly suggested by @JackPDouglas
\n
soup wrap:
To expand on JackPDouglas' answer: you cannot use a parameter as the table name in a static cursor. You must use dynamic SQL opened into a REF CURSOR.
http://download.oracle.com/docs/cd/B10500_01/appdev.920/a96590/adg09dyn.htm#24492
CREATE OR REPLACE PROCEDURE dynaQuery(
TAB IN VARCHAR2,
sid in number ,
cur OUT NOCOPY sys_refcursor) IS
query_str VARCHAR2(200);
BEGIN
query_str := 'SELECT USERNAME FROM ' || tab
|| ' WHERE sid= :id';
dbms_output.put_line(query_str);
OPEN cur FOR query_str USING sid;
END ;
/
Commence Example
create table test1(sid number, username varchar2(50));
insert into test1(sid, username) values(123,'abc');
insert into test1(sid, username) values(123,'ddd');
insert into test1(sid, username) values(222,'abc');
commit;
/
declare
cur sys_refcursor ;
sid number ;
uName varchar2(50) ;
begin
sid := 123;
dynaQuery('test1',sid, cur);
LOOP
FETCH cur INTO uName;
DBMS_OUTPUT.put_line(uName);
EXIT WHEN cur%NOTFOUND;
-- process row here
END LOOP;
CLOSE CUR;
end ;
Output:
SELECT USERNAME FROM test1 WHERE sid= :id
abc
ddd
abc
ddd
ddd
EDIT: Added the CLOSE cur, as rightly suggested by @JackPDouglas
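The same principle applies outside PL/SQL: the table name must be spliced into the SQL text, while ordinary values stay as bind parameters. A hedged sketch with Python's sqlite3, reusing the test1 data from the example above; the whitelist check is my addition, not part of the original answer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test1 (sid INTEGER, username TEXT);
INSERT INTO test1 VALUES (123, 'abc'), (123, 'ddd'), (222, 'abc');
""")

def dyna_query(conn, table, sid):
    # Table names cannot be bound parameters, so the identifier is spliced
    # into the SQL text -- whitelist it first to avoid SQL injection.
    if table not in {"test1"}:          # allowed tables (demo assumption)
        raise ValueError("unknown table")
    query_str = f"SELECT username FROM {table} WHERE sid = ? ORDER BY username"
    return [row[0] for row in conn.execute(query_str, (sid,))]

names = dyna_query(conn, "test1", 123)
```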
qid & accept id:
(4434581, 4434608)
query:
SQL Query to check if student1 has a course with student 2
soup:
Try a self-join:
\nSELECT T1.id_group\nFROM jos_gj_users T1\nJOIN jos_gj_users T2\nON T1.id_group = T2.id_group\nWHERE T1.id_user = 20\nAND T2.id_user = 21\n
\nTo just get a "true or false" result you can check from the client to see if at least one row exists in the result set rather than fetching the entire results.
\nAlternatively you can do it in SQL by wrapping the above query in another SELECT that uses EXISTS:
\nSELECT CASE WHEN EXISTS\n(\n SELECT T1.id_group\n FROM jos_gj_users T1\n JOIN jos_gj_users T2\n ON T1.id_group = T2.id_group\n WHERE T1.id_user = 20\n AND T2.id_user = 21\n) THEN 1 ELSE 0 END AS result\n
\nThis query returns either 0 (false) or 1 (true).
\n
soup wrap:
Try a self-join:
SELECT T1.id_group
FROM jos_gj_users T1
JOIN jos_gj_users T2
ON T1.id_group = T2.id_group
WHERE T1.id_user = 20
AND T2.id_user = 21
To just get a "true or false" result you can check from the client whether at least one row exists in the result set, rather than fetching the entire result set.
Alternatively you can do it in SQL by wrapping the above query in another SELECT that uses EXISTS:
SELECT CASE WHEN EXISTS
(
SELECT T1.id_group
FROM jos_gj_users T1
JOIN jos_gj_users T2
ON T1.id_group = T2.id_group
WHERE T1.id_user = 20
AND T2.id_user = 21
) THEN 1 ELSE 0 END AS result
This query returns either 0 (false) or 1 (true).
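For a quick check of the EXISTS wrapper, here is a sketch in Python's sqlite3 with made-up group data; group 7 is shared by users 20 and 21, so only that pair reports 1:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE jos_gj_users (id_group INTEGER, id_user INTEGER);
INSERT INTO jos_gj_users VALUES (7, 20), (7, 21), (8, 20), (9, 22);
""")

def share_group(u1, u2):
    # Returns 1 if the two users appear in at least one common group, else 0.
    return conn.execute("""
        SELECT CASE WHEN EXISTS (
            SELECT 1
            FROM jos_gj_users T1
            JOIN jos_gj_users T2 ON T1.id_group = T2.id_group
            WHERE T1.id_user = ? AND T2.id_user = ?
        ) THEN 1 ELSE 0 END
    """, (u1, u2)).fetchone()[0]
```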
qid & accept id:
(4441599, 4441664)
query:
How do I join an unknown number of rows to another row?
soup:
You need to use a Dynamic PIVOT clause in order to do this.
\nEDIT:
\nOk so I've done some playing around and based on the following sample data:
\nCreate Table TableA\n(\nIDCol int,\nSomeValue varchar(50)\n)\nCreate Table TableB\n(\nIDCol int,\nKEYCol int,\nValue varchar(50)\n)\n\nInsert into TableA\nValues (1, '123223')\nInsert Into TableA\nValues (2,'1232ff')\nInsert into TableA\nValues (3, '222222')\n\nInsert Into TableB\nValues( 23, 1, 435)\nInsert Into TableB\nValues( 24, 1, 436)\n\nInsert Into TableB\nValues( 25, 3, 45)\nInsert Into TableB\nValues( 26, 3, 46)\n\nInsert Into TableB\nValues( 27, 3, 435)\nInsert Into TableB\nValues( 28, 3, 437)\n
\nYou can execute the following Dynamic SQL.
\ndeclare @sql varchar(max)\ndeclare @pivot_list varchar(max)\ndeclare @pivot_select varchar(max)\n\nSelect \n @pivot_list = Coalesce(@Pivot_List + ', ','') + '[' + Value +']',\n @Pivot_select = Coalesce(@pivot_Select, ', ','') +'IsNull([' + Value +'],'''') as [' + Value + '],'\nFrom \n(\nSelect distinct Value From dbo.TableB \n)PivotCodes\n\nSet @Sql = '\n;With p as (\n\nSelect a.IdCol,\n a.SomeValue,\n b.Value\nFrom dbo.TableA a\nLeft Join dbo.TableB b on a.IdCol = b.KeyCol\n)\nSelect IdCol, SomeValue ' + Left(@pivot_select, Len(@Pivot_Select)-1) + '\nFrom p\nPivot ( Max(Value) for Value in (' + @pivot_list + '\n )\n )as pvt\n'\n\nexec (@sql)\n
\nThis gives you the following output:
\n
\nAlthough this works at the moment it would be a nightmare to maintain. I'd recommend trying to achieve these results somewhere else. i.e not in SQL!
\nGood luck!
\n
soup wrap:
You need to use a Dynamic PIVOT clause in order to do this.
EDIT:
Ok so I've done some playing around and based on the following sample data:
Create Table TableA
(
IDCol int,
SomeValue varchar(50)
)
Create Table TableB
(
IDCol int,
KEYCol int,
Value varchar(50)
)
Insert into TableA
Values (1, '123223')
Insert Into TableA
Values (2,'1232ff')
Insert into TableA
Values (3, '222222')
Insert Into TableB
Values( 23, 1, 435)
Insert Into TableB
Values( 24, 1, 436)
Insert Into TableB
Values( 25, 3, 45)
Insert Into TableB
Values( 26, 3, 46)
Insert Into TableB
Values( 27, 3, 435)
Insert Into TableB
Values( 28, 3, 437)
You can execute the following Dynamic SQL.
declare @sql varchar(max)
declare @pivot_list varchar(max)
declare @pivot_select varchar(max)
Select
@pivot_list = Coalesce(@Pivot_List + ', ','') + '[' + Value +']',
@Pivot_select = Coalesce(@pivot_Select, ', ','') +'IsNull([' + Value +'],'''') as [' + Value + '],'
From
(
Select distinct Value From dbo.TableB
)PivotCodes
Set @Sql = '
;With p as (
Select a.IdCol,
a.SomeValue,
b.Value
From dbo.TableA a
Left Join dbo.TableB b on a.IdCol = b.KeyCol
)
Select IdCol, SomeValue ' + Left(@pivot_select, Len(@Pivot_Select)-1) + '
From p
Pivot ( Max(Value) for Value in (' + @pivot_list + '
)
)as pvt
'
exec (@sql)
This gives you the following output:

Although this works at the moment, it would be a nightmare to maintain. I'd recommend trying to achieve these results somewhere else, i.e. not in SQL!
Good luck!
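If "somewhere else, not in SQL" means the application layer, the same dynamic pivot is often easier to build there. A sketch in Python's sqlite3 that generates one MAX(CASE ...) column per distinct Value (SQLite has no PIVOT clause); this uses a subset of the sample data above, and note the values are spliced into the SQL text, which is only acceptable for trusted data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TableA (IDCol INT, SomeValue TEXT);
CREATE TABLE TableB (IDCol INT, KEYCol INT, Value TEXT);
INSERT INTO TableA VALUES (1,'123223'), (2,'1232ff'), (3,'222222');
INSERT INTO TableB VALUES (23,1,'435'), (24,1,'436'), (25,3,'45');
""")
# One MAX(CASE ...) column per distinct Value, mirroring the dynamic PIVOT.
values = [v for (v,) in conn.execute("SELECT DISTINCT Value FROM TableB ORDER BY Value")]
cols = ", ".join(
    f"MAX(CASE WHEN b.Value = '{v}' THEN b.Value ELSE '' END) AS [{v}]" for v in values
)
sql = f"""
SELECT a.IDCol, a.SomeValue, {cols}
FROM TableA a LEFT JOIN TableB b ON a.IDCol = b.KEYCol
GROUP BY a.IDCol, a.SomeValue
ORDER BY a.IDCol
"""
pivoted = conn.execute(sql).fetchall()
```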
qid & accept id:
(4459902, 4460148)
query:
is it possible to have alphanumeric sequence generator in sql
soup:
You could create a function like this:
\ncreate function to_base_36 (n integer) return varchar2\nis\n q integer;\n r varchar2(100);\nbegin\n q := n;\n while q >= 36 loop\n r := chr(mod(q,36)+case when mod(q,36) < 10 then 48 else 55 end) || r;\n q := floor(q/36);\n end loop;\n r := chr(mod(q,36)+case when mod(q,36) < 10 then 48 else 55 end) || r;\n return lpad(r,4,'0');\nend;\n
\nand then use it like this:
\nselect rownum, to_base_36(rownum)\nfrom dual\nconnect by level < 36*36*36*36;\n
\nOr, without creating a function:
\nwith digits as\n( select n, chr(mod(n,36)+case when mod(n,36) < 10 then 48 else 55 end) d\n from (Select rownum-1 as n from dual connect by level < 37)\n)\nselect d1.n*36*36*36 + d2.n*36*36 + d3.n*36 + d4.n, d1.d||d2.d||d3.d||d4.d\nfrom digits d1, digits d2, digits d3, digits d4\n
\n
soup wrap:
You could create a function like this:
create function to_base_36 (n integer) return varchar2
is
q integer;
r varchar2(100);
begin
q := n;
while q >= 36 loop
r := chr(mod(q,36)+case when mod(q,36) < 10 then 48 else 55 end) || r;
q := floor(q/36);
end loop;
r := chr(mod(q,36)+case when mod(q,36) < 10 then 48 else 55 end) || r;
return lpad(r,4,'0');
end;
and then use it like this:
select rownum, to_base_36(rownum)
from dual
connect by level < 36*36*36*36;
Or, without creating a function:
with digits as
( select n, chr(mod(n,36)+case when mod(n,36) < 10 then 48 else 55 end) d
from (Select rownum-1 as n from dual connect by level < 37)
)
select d1.n*36*36*36 + d2.n*36*36 + d3.n*36 + d4.n, d1.d||d2.d||d3.d||d4.d
from digits d1, digits d2, digits d3, digits d4
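The base-36 function is easy to port. Here is the same digit arithmetic in Python, matching the PL/SQL version including the lpad to four characters:

```python
def to_base_36(n: int) -> str:
    # 0-9 then A-Z, exactly like the chr() arithmetic in the PL/SQL function.
    digits = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    q, r = n, ""
    while q >= 36:
        r = digits[q % 36] + r
        q //= 36
    r = digits[q] + r
    return r.rjust(4, "0")   # lpad(r, 4, '0')
```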
qid & accept id:
(4521020, 4521199)
query:
Calculate open timeslots given availability and existing appointments - by day
soup:
You need to discretize your time. Choose a time interval to use as your atom. Based on your example, that should probably be a half hour.
\nNow
\nCreate table Availability (person_id int, interval_id int);\nCreate table Appointment (person_id int, interval_id int, appointment_desc text);\n
\nI'm leaving out the primary keys, and there should be foreign keys to lookup tables for Person and Interval.
\nThere will be an Interval table for looking up what each interval_id stands for.
\nCreate table Interval(interval_id int primary key, interval_start datetime, interval_end datetime)\n
\nPopulate the Interval table with every interval you're going to have in your calendar. Populating it might be a chore, but you can create the actual values in Excel, then paste them into your Interval table.
\nNow you can find free intervals as
\nSelect person_id, interval_id from Availability av\nleft join Appointment ap\non av.person_id = ap.person_id and av.interval_id = ap.interval_id\nwhere ap.interval_id is null\n
\nMSSQL can do this kind of outer join in no time (provided you set up the keys), and you can include the list of free intervals in the pages you send, with javascript to display them when and as desired.
\n
soup wrap:
You need to discretize your time. Choose a time interval to use as your atom. Based on your example, that should probably be a half hour.
Now
Create table Availability (person_id int, interval_id int);
Create table Appointment (person_id int, interval_id int, appointment_desc text);
I'm leaving out the primary keys, and there should be foreign keys to lookup tables for Person and Interval.
There will be an Interval table for looking up what each interval_id stands for.
Create table Interval(interval_id int primary key, interval_start datetime, interval_end datetime)
Populate the Interval table with every interval you're going to have in your calendar. Populating it might be a chore, but you can create the actual values in Excel, then paste them into your Interval table.
Now you can find free intervals as
Select person_id, interval_id from Availability av
left join Appointment ap
on av.person_id = ap.person_id and av.interval_id = ap.interval_id
where ap.interval_id is null
MSSQL can do this kind of outer join in no time (provided you set up the keys), and you can include the list of free intervals in the pages you send, with javascript to display them when and as desired.
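The free-slot query is a plain anti-join, so it ports directly to other engines. A sketch with Python's sqlite3 and invented interval ids (person 1 is available for three slots and booked for one):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Availability (person_id INT, interval_id INT);
CREATE TABLE Appointment (person_id INT, interval_id INT, appointment_desc TEXT);
INSERT INTO Availability VALUES (1, 100), (1, 101), (1, 102);
INSERT INTO Appointment VALUES (1, 101, 'dentist');
""")
# Available intervals with no matching appointment row.
free = conn.execute("""
SELECT av.person_id, av.interval_id
FROM Availability av
LEFT JOIN Appointment ap
  ON av.person_id = ap.person_id AND av.interval_id = ap.interval_id
WHERE ap.interval_id IS NULL
ORDER BY av.interval_id
""").fetchall()
```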
qid & accept id:
(4589157, 4589195)
query:
If and only if condition SQL -- SQL server 2008
soup:
For a "complete" pull:
\nSELECT p.profileID, p.firstName, p.lastName, sc.cprAdultExp, sc.....\nFROM pro_Profile AS p\n LEFT OUTER JOIN mod_StudentCertifications AS sc ON sc.profileID = p.profileID\nWHERE p.profileID NOT IN\n (\n SELECT profileID\n FROM mod_userStatus\n )\n;\n
\nFor a single "profile" pull:
\nSELECT p.profileID, p.firstName, p.lastName, sc.cprAdultExp, sc.....\nFROM pro_Profile AS p\n LEFT OUTER JOIN mod_StudentCertifications AS sc ON sc.profileID = p.profileID\nWHERE p.profileID = ?\n AND p.profileID NOT IN \n (\n SELECT profileID\n FROM mod_userStatus\n WHERE profileID = ?\n )\n;\n
\nEDIT: Looked at the execution plan of using a LEFT OUTER JOIN for mod_userStatus and checking it's primary key for null VS a NOT IN statement in a similar setup. The NOT IN statement is indeed less costly.
\nThe LEFT OUTER JOIN performs a filter & hash match (Cost: 2.984):\n
\nWhile the NOT IN performs a merge join (Cost: 1.508):\n
\n
soup wrap:
For a "complete" pull:
SELECT p.profileID, p.firstName, p.lastName, sc.cprAdultExp, sc.....
FROM pro_Profile AS p
LEFT OUTER JOIN mod_StudentCertifications AS sc ON sc.profileID = p.profileID
WHERE p.profileID NOT IN
(
SELECT profileID
FROM mod_userStatus
)
;
For a single "profile" pull:
SELECT p.profileID, p.firstName, p.lastName, sc.cprAdultExp, sc.....
FROM pro_Profile AS p
LEFT OUTER JOIN mod_StudentCertifications AS sc ON sc.profileID = p.profileID
WHERE p.profileID = ?
AND p.profileID NOT IN
(
SELECT profileID
FROM mod_userStatus
WHERE profileID = ?
)
;
EDIT: Looked at the execution plan of using a LEFT OUTER JOIN against mod_userStatus and checking its primary key for NULL vs. a NOT IN statement in a similar setup. The NOT IN statement is indeed less costly.
The LEFT OUTER JOIN performs a filter & hash match (Cost: 2.984):

While the NOT IN performs a merge join (Cost: 1.508):

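A minimal sketch of the NOT IN anti-join in Python's sqlite3, with invented data. One caveat worth noting: if the subquery column can ever be NULL, NOT IN returns no rows at all, which is a common reason to prefer NOT EXISTS:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pro_Profile (profileID INT, firstName TEXT);
CREATE TABLE mod_userStatus (profileID INT);
INSERT INTO pro_Profile VALUES (1, 'Ann'), (2, 'Bob'), (3, 'Cy');
INSERT INTO mod_userStatus VALUES (2);
""")
# Profiles with no userStatus row. If mod_userStatus.profileID could be
# NULL, rewrite this with NOT EXISTS to avoid the empty-result surprise.
not_in = conn.execute("""
SELECT p.profileID FROM pro_Profile p
WHERE p.profileID NOT IN (SELECT profileID FROM mod_userStatus)
ORDER BY p.profileID
""").fetchall()
```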
qid & accept id:
(4598659, 4598746)
query:
sql stored procedure loop
soup:
This is not directly an answer, but the code cannot be posted in a readible fashion in a comment, so I think this should be okay here:
\nDon't loop in SPs, rather use a CTE to generate the numbers you need.
\nDECLARE @YearToGet int;\nSET @YearToGet = 2005;\n\nWITH Years AS (\n SELECT DATEPART(year, GETDATE()) [Year]\n UNION ALL\n SELECT [Year]-1 FROM Years WHERE [Year]>@YearToGet\n)\nSELECT * FROM Years -- join here with your query\nOPTION (MAXRECURSION 0) -- this avoids hitting the recursion limit in the CTE\n
\nEdit: Try this
\nWITH Years\n AS (\n SELECT DATEPART(year, GETDATE()) [Year]\n UNION ALL\n SELECT [Year]-1\n FROM Years\n WHERE [Year] > @YearToGet\n )\n SELECT DIVISION, DYYYY, SUM(APRICE) AS Sales, SUM(PARTY) AS PAX, SUM(NetAmount) AS NetSales, SUM(InsAmount) AS InsSales, SUM(CancelRevenue) AS CXSales, SUM(OtherAmount) AS OtherSales, SUM(CXVALUE) AS CXValue\n FROM dbo.B101BookingsDetails \n JOIN Years yr ON DYYYY = yr.[Year]\n WHERE Booked <= CONVERT(int, DATEADD(year, DYYYY-YEAR(GETDATE()), DATEADD(day, DATEDIFF(day, 2, GETDATE()), 0)))\n GROUP BY DYYYY, DIVISION\n ORDER BY DIVISION, DYYYY\n OPTION (MAXRECURSION 0);\n
\n
soup wrap:
This is not directly an answer, but the code cannot be posted in a readable fashion in a comment, so I think this should be okay here:
Don't loop in SPs, rather use a CTE to generate the numbers you need.
DECLARE @YearToGet int;
SET @YearToGet = 2005;
WITH Years AS (
SELECT DATEPART(year, GETDATE()) [Year]
UNION ALL
SELECT [Year]-1 FROM Years WHERE [Year]>@YearToGet
)
SELECT * FROM Years -- join here with your query
OPTION (MAXRECURSION 0) -- this avoids hitting the recursion limit in the CTE
Edit: Try this
WITH Years
AS (
SELECT DATEPART(year, GETDATE()) [Year]
UNION ALL
SELECT [Year]-1
FROM Years
WHERE [Year] > @YearToGet
)
SELECT DIVISION, DYYYY, SUM(APRICE) AS Sales, SUM(PARTY) AS PAX, SUM(NetAmount) AS NetSales, SUM(InsAmount) AS InsSales, SUM(CancelRevenue) AS CXSales, SUM(OtherAmount) AS OtherSales, SUM(CXVALUE) AS CXValue
FROM dbo.B101BookingsDetails
JOIN Years yr ON DYYYY = yr.[Year]
WHERE Booked <= CONVERT(int, DATEADD(year, DYYYY-YEAR(GETDATE()), DATEADD(day, DATEDIFF(day, 2, GETDATE()), 0)))
GROUP BY DYYYY, DIVISION
ORDER BY DIVISION, DYYYY
OPTION (MAXRECURSION 0);
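The recursive-CTE idea is portable. Here it is in Python's sqlite3, with the anchor year fixed at 2010 instead of GETDATE() so the result is deterministic (SQLite needs the RECURSIVE keyword and has no MAXRECURSION option):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Counts down from the anchor year to @YearToGet, one row per year.
years = [y for (y,) in conn.execute("""
WITH RECURSIVE Years(Year) AS (
    SELECT 2010
    UNION ALL
    SELECT Year - 1 FROM Years WHERE Year > 2005
)
SELECT Year FROM Years
""")]
```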
qid & accept id:
(4621932, 4623655)
query:
Oracle: How do I display DBMS_XMLDOM.DOMDocument for debugging?
soup:
DBMS_XMLDOM.WRITETOBUFFER Writes the contents of the node to a buffer.\nDBMS_XMLDOM.WRITETOCLOB Writes the contents of the node to a CLOB.\nDBMS_XMLDOM.WRITETOFILE Writes the contents of the node to a file.\n
\nI have PL/SQL code that wites it to the file system using a DIRECTORY:
\n dbms_xmldom.writeToFile(dbms_xmldom.newDOMDocument( xmldoc)\n ,'DATAPUMPDIR/myfile.xml') ;\n
\nI have created a function using dbms_xmldom.writetoclob
\n create or replace function xml2clob (xmldoc XMLType) return CLOB is\n clobdoc CLOB := ' ';\n begin\n dbms_xmldom.writeToClob(dbms_xmldom.newDOMDocument( xmldoc)\n ,clobdoc) ;\n return clobdoc;\n end;\n /\n
\nQuery:
\nSELECT xml2clob(Sys_Xmlagg(\n Xmlelement(Name "dummy"\n ,dummy\n ),Xmlformat('dual')))\n FROM dual;\n
\nOutput:
\n\n\n X \n \n
\nYou could try using a function like this:
\n create or replace function dom2clob (domdoc DBMS_XMLDOM.DOMDocument) return CLOB is\n clobdoc CLOB := ' ';\n begin\n dbms_xmldom.writeToClob(domdoc,clobdoc) ;\n return clobdoc;\n end;\n /\n
\n
soup wrap:
DBMS_XMLDOM.WRITETOBUFFER Writes the contents of the node to a buffer.
DBMS_XMLDOM.WRITETOCLOB Writes the contents of the node to a CLOB.
DBMS_XMLDOM.WRITETOFILE Writes the contents of the node to a file.
I have PL/SQL code that writes it to the file system using a DIRECTORY:
dbms_xmldom.writeToFile(dbms_xmldom.newDOMDocument( xmldoc)
,'DATAPUMPDIR/myfile.xml') ;
I have created a function using dbms_xmldom.writetoclob
create or replace function xml2clob (xmldoc XMLType) return CLOB is
clobdoc CLOB := ' ';
begin
dbms_xmldom.writeToClob(dbms_xmldom.newDOMDocument( xmldoc)
,clobdoc) ;
return clobdoc;
end;
/
Query:
SELECT xml2clob(Sys_Xmlagg(
Xmlelement(Name "dummy"
,dummy
),Xmlformat('dual')))
FROM dual;
Output:
<dual>
 <dummy>X</dummy>
</dual>
You could try using a function like this:
create or replace function dom2clob (domdoc DBMS_XMLDOM.DOMDocument) return CLOB is
clobdoc CLOB := ' ';
begin
dbms_xmldom.writeToClob(domdoc,clobdoc) ;
return clobdoc;
end;
/
qid & accept id:
(4761507, 4761860)
query:
Matching first char in string to digit or non-standard character
soup:
Create links representing every letter and number. Clicking these links will provide the users with the results from the database that begin with the selected character.
\nSELECT title FROM table\nWHERE LEFT(title,1) = ?Char\nORDER BY title ASC;\n
\nConsider paginating these result pages into appropriate chunks. MySQL will let you do this with LIMIT
\nThis command will select the first 100 records from the desired character group:
\nSELECT title FROM table\nWHERE LEFT(title,1) = ?Char\nORDER BY title ASC\nLIMIT 0, 100;\n
\nThis command will select the second 100 records from the desired character group:
\nSELECT title FROM table\nWHERE LEFT(title,1) = ?Char\nORDER BY title ASC\nLIMIT 100, 100;\n
\nPer your comments, if you want to combine characters 0-9 without using regex, you will need to combine several OR statements:
\nSELECT title FROM table\nWHERE (\n LEFT(title,1) = '0'\n OR LEFT(title,1) = '1'\n ...\n )\nORDER BY title ASC;\n
\n
soup wrap:
Create links representing every letter and number. Clicking these links will provide the users with the results from the database that begin with the selected character.
SELECT title FROM table
WHERE LEFT(title,1) = ?Char
ORDER BY title ASC;
Consider paginating these result pages into appropriate chunks. MySQL will let you do this with LIMIT
This command will select the first 100 records from the desired character group:
SELECT title FROM table
WHERE LEFT(title,1) = ?Char
ORDER BY title ASC
LIMIT 0, 100;
This command will select the second 100 records from the desired character group:
SELECT title FROM table
WHERE LEFT(title,1) = ?Char
ORDER BY title ASC
LIMIT 100, 100;
Per your comments, if you want to combine characters 0-9 without using regex, you will need to combine several OR statements:
SELECT title FROM table
WHERE (
LEFT(title,1) = '0'
OR LEFT(title,1) = '1'
...
)
ORDER BY title ASC;
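SQLite spells LEFT(title, 1) as SUBSTR(title, 1, 1), but the pattern is otherwise identical. A sketch with invented titles, including the LIMIT/OFFSET pagination as bound parameters:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE titles (title TEXT);
INSERT INTO titles VALUES ('Alpha'), ('Amber'), ('Beta'), ('7 Wonders');
""")

def titles_starting_with(ch, limit=100, offset=0):
    # SUBSTR(title, 1, 1) is SQLite's LEFT(title, 1).
    return [t for (t,) in conn.execute(
        "SELECT title FROM titles WHERE SUBSTR(title, 1, 1) = ? "
        "ORDER BY title LIMIT ? OFFSET ?", (ch, limit, offset))]
```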
qid & accept id:
(4773206, 4773215)
query:
how to update using nested query in SQL
soup:
Give this a try
\nUpdate t\nSet t.yyyy = q.Name\nFrom TableToUpdate t\nJoin AddressTable q on q.Address = t.Address\n
\nThis assumes that Address field (which you are joining on) is a one to one relationship with the Address field in the table you are updating
\nThis can also be written
\nUpdate TableToUpdate\nSet yyyy = q.Name\nFrom AddressTable q\nWHERE q.Address = TableToUpdate.Address\n
\nSince the update table is accessible in the FROM/WHERE clauses, except it cannot be aliased.
\n
soup wrap:
Give this a try
Update t
Set t.yyyy = q.Name
From TableToUpdate t
Join AddressTable q on q.Address = t.Address
This assumes that Address field (which you are joining on) is a one to one relationship with the Address field in the table you are updating
This can also be written
Update TableToUpdate
Set yyyy = q.Name
From AddressTable q
WHERE q.Address = TableToUpdate.Address
This works because the table being updated is accessible in the FROM/WHERE clauses, although it cannot be aliased there.
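UPDATE ... FROM is T-SQL syntax; the portable "nested query" form the question title asks about uses a correlated subquery instead. A sketch in Python's sqlite3 with invented addresses; the WHERE EXISTS guard keeps unmatched rows from being overwritten with NULL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TableToUpdate (Address TEXT, yyyy TEXT);
CREATE TABLE AddressTable (Address TEXT, Name TEXT);
INSERT INTO TableToUpdate VALUES ('10 Main St', NULL), ('1 Elm Rd', NULL);
INSERT INTO AddressTable VALUES ('10 Main St', 'Alice');
""")
# Correlated-subquery form of the same update; rows with no match in
# AddressTable keep their old value thanks to the EXISTS guard.
conn.execute("""
UPDATE TableToUpdate
SET yyyy = (SELECT q.Name FROM AddressTable q
            WHERE q.Address = TableToUpdate.Address)
WHERE EXISTS (SELECT 1 FROM AddressTable q
              WHERE q.Address = TableToUpdate.Address)
""")
rows = conn.execute(
    "SELECT Address, yyyy FROM TableToUpdate ORDER BY Address").fetchall()
```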
qid & accept id:
(4787104, 4787136)
query:
How to Select and Order By columns not in Groupy By SQL statement - Oracle
soup:
It does not make sense to include columns that are not part of the GROUP BY clause. Consider if you have a MIN(X), MAX(Y) in the SELECT clause, which row should other columns (not grouped) come from?
\nIf your Oracle version is recent enough, you can use SUM - OVER() to show the SUM (grouped) against every data row.
\nSELECT \n IMPORTID,Site,Desk,Region,RefObligor,\n SUM(NOTIONAL) OVER(PARTITION BY IMPORTID, Region,RefObligor) AS SUM_NOTIONAL\nFrom \n Positions\nWhere\n ID = :importID\nOrder BY \n IMPORTID,Region,Site,Desk,RefObligor\n
\nAlternatively, you need to make an aggregate out of the Site, Desk columns
\nSELECT \n IMPORTID,Region,Min(Site) Site, Min(Desk) Desk,RefObligor,SUM(NOTIONAL) AS SUM_NOTIONAL\nFrom \n Positions\nWhere\n ID = :importID\nGROUP BY \n IMPORTID, Region,RefObligor\nOrder BY \n IMPORTID, Region,Min(Site),Min(Desk),RefObligor\n
\n
soup wrap:
It does not make sense to include columns that are not part of the GROUP BY clause. Consider if you have a MIN(X), MAX(Y) in the SELECT clause, which row should other columns (not grouped) come from?
If your Oracle version is recent enough, you can use SUM(...) OVER (...) to show the grouped SUM against every data row.
SELECT
IMPORTID,Site,Desk,Region,RefObligor,
SUM(NOTIONAL) OVER(PARTITION BY IMPORTID, Region,RefObligor) AS SUM_NOTIONAL
From
Positions
Where
ID = :importID
Order BY
IMPORTID,Region,Site,Desk,RefObligor
Alternatively, you need to make an aggregate out of the Site, Desk columns
SELECT
IMPORTID,Region,Min(Site) Site, Min(Desk) Desk,RefObligor,SUM(NOTIONAL) AS SUM_NOTIONAL
From
Positions
Where
ID = :importID
GROUP BY
IMPORTID, Region,RefObligor
Order BY
IMPORTID, Region,Min(Site),Min(Desk),RefObligor
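SUM() OVER (PARTITION BY ...) is standard SQL and also works in SQLite 3.25+. A sketch with a trimmed column list and invented positions: both US/ACME rows get the same 150 total while keeping their own Site:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # window functions need SQLite >= 3.25
conn.executescript("""
CREATE TABLE Positions (IMPORTID INT, Site TEXT, Region TEXT,
                        RefObligor TEXT, NOTIONAL INT);
INSERT INTO Positions VALUES
 (1, 'NY', 'US', 'ACME', 100),
 (1, 'LN', 'US', 'ACME', 50),
 (1, 'NY', 'EU', 'BETA', 70);
""")
# Every row keeps its own detail columns; SUM_NOTIONAL repeats the
# partition total on each row of the partition.
rows = conn.execute("""
SELECT IMPORTID, Site, Region, RefObligor,
       SUM(NOTIONAL) OVER (PARTITION BY IMPORTID, Region, RefObligor) AS SUM_NOTIONAL
FROM Positions
ORDER BY Region, Site
""").fetchall()
```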
qid & accept id:
(4821831, 4822494)
query:
sql server: generate primary key based on counter and another column value
soup:
Whilst I agree with the naysayers, the principle of "accepting that which cannot be changed" tends to lower the overall stress level, IMHO. Try the following approach.
\nDisadvantages
\n\n- Single-row inserts only. You won't be doing any bulk inserts to your new customer table as you'll need to execute the stored procedure each time you want to insert a row.
\n- A certain amount of contention for the key generation table, hence a potential for blocking.
\n
\nOn the up side, though, this approach doesn't have any race conditions associated with it, and it isn't too egregious a hack to really and truly offend my sensibilities. So...
\nFirst, start with a key generation table. It will contain 1 row for each company, containing your company identifier and an integer counter that we'll be bumping up each time an insert is performed.
\ncreate table dbo.CustomerNumberGenerator\n(\n company varchar(8) not null ,\n curr_value int not null default(1) ,\n\n constraint CustomerNumberGenerator_PK primary key clustered ( company ) ,\n\n)\n
\nSecond, you'll need a stored procedure like this (in fact, you might want to integrate this logic into the stored procedure responsible for inserting the customer record. More on that in a bit). This stored procedure accepts a company identifier (e.g. 'MSFT') as its sole argument. This stored procedure does the following:
\n\n- Puts the company id into canonical form (e.g. uppercase and trimmed of leading/trailing whitespace).
\n- Inserts the row into the key generation table if it doesn't already exist (atomic operation).
\n- In a single, atomic operation (update statement), the current value of the counter for the specified company is fetched and then incremented.
\n- The customer number is then generated in the specified way and returned to the caller via a 1-row/1-column
SELECT statement. \n
\nHere you go:
\ncreate procedure dbo.GetNewCustomerNumber\n\n @company varchar(8)\n\nas\n\n set nocount on\n set ansi_nulls on\n set concat_null_yields_null on\n set xact_abort on\n\n declare\n @customer_number varchar(32)\n\n --\n -- put the supplied key in canonical form\n --\n set @company = ltrim(rtrim(upper(@company)))\n\n --\n -- if the name isn't already defined in the table, define it.\n --\n insert dbo.CustomerNumberGenerator ( company )\n select id = @company\n where not exists ( select *\n from dbo.CustomerNumberGenerator\n where company = @company\n )\n\n --\n -- now, an interlocked update to get the current value and increment the table\n --\n update CustomerNumberGenerator\n set @customer_number = company + right( '00000000' + convert(varchar,curr_value) , 8 ) ,\n curr_value = curr_value + 1\n where company = @company\n\n --\n -- return the new unique value to the caller\n --\n select customer_number = @customer_number\n return 0\n\ngo\n
\nThe reason you might want to integrate this into the stored procedure that inserts a row into the customer table is that it makes globbing it all together into a single transaction; without that, your customer numbers may/will get gaps when an insert fails land gets rolled back.
\n
soup wrap:
Whilst I agree with the naysayers, the principle of "accepting that which cannot be changed" tends to lower the overall stress level, IMHO. Try the following approach.
Disadvantages
- Single-row inserts only. You won't be doing any bulk inserts to your new customer table as you'll need to execute the stored procedure each time you want to insert a row.
- A certain amount of contention for the key generation table, hence a potential for blocking.
On the up side, though, this approach doesn't have any race conditions associated with it, and it isn't too egregious a hack to really and truly offend my sensibilities. So...
First, start with a key generation table. It will contain 1 row for each company, containing your company identifier and an integer counter that we'll be bumping up each time an insert is performed.
create table dbo.CustomerNumberGenerator
(
company varchar(8) not null ,
curr_value int not null default(1) ,
constraint CustomerNumberGenerator_PK primary key clustered ( company ) ,
)
Second, you'll need a stored procedure like this (in fact, you might want to integrate this logic into the stored procedure responsible for inserting the customer record. More on that in a bit). This stored procedure accepts a company identifier (e.g. 'MSFT') as its sole argument. This stored procedure does the following:
- Puts the company id into canonical form (e.g. uppercase and trimmed of leading/trailing whitespace).
- Inserts the row into the key generation table if it doesn't already exist (atomic operation).
- In a single, atomic operation (update statement), the current value of the counter for the specified company is fetched and then incremented.
- The customer number is then generated in the specified way and returned to the caller via a 1-row/1-column
SELECT statement.
Here you go:
create procedure dbo.GetNewCustomerNumber
@company varchar(8)
as
set nocount on
set ansi_nulls on
set concat_null_yields_null on
set xact_abort on
declare
@customer_number varchar(32)
--
-- put the supplied key in canonical form
--
set @company = ltrim(rtrim(upper(@company)))
--
-- if the name isn't already defined in the table, define it.
--
insert dbo.CustomerNumberGenerator ( company )
select id = @company
where not exists ( select *
from dbo.CustomerNumberGenerator
where company = @company
)
--
-- now, an interlocked update to get the current value and increment the table
--
update CustomerNumberGenerator
set @customer_number = company + right( '00000000' + convert(varchar,curr_value) , 8 ) ,
curr_value = curr_value + 1
where company = @company
--
-- return the new unique value to the caller
--
select customer_number = @customer_number
return 0
go
The reason you might want to integrate this into the stored procedure that inserts a row into the customer table is that it lets you glob it all together into a single transaction; without that, your customer numbers may/will get gaps when an insert fails and gets rolled back.
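The same fetch-and-bump pattern can be sketched with Python's sqlite3. SQLite cannot assign a variable inside an UPDATE the way the T-SQL does, so this sketch reads the counter and increments it inside one transaction instead; the helper name and the eight-digit formatting mirror the procedure above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE CustomerNumberGenerator (
    company    TEXT PRIMARY KEY,
    curr_value INTEGER NOT NULL DEFAULT 1
)""")

def get_new_customer_number(conn, company):
    company = company.strip().upper()        # canonical form
    with conn:  # one transaction: insert-if-missing, fetch, then bump
        conn.execute(
            "INSERT OR IGNORE INTO CustomerNumberGenerator (company) VALUES (?)",
            (company,))
        (n,) = conn.execute(
            "SELECT curr_value FROM CustomerNumberGenerator WHERE company = ?",
            (company,)).fetchone()
        conn.execute(
            "UPDATE CustomerNumberGenerator SET curr_value = curr_value + 1 "
            "WHERE company = ?", (company,))
    return company + str(n).zfill(8)         # right('00000000' + n, 8)
```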
qid & accept id:
(4823208, 4823298)
query:
How do I select unique pairs of rows from a table at random?
soup:
select a.id, b.id\nfrom people1 a\ninner join people1 b on a.id < b.id\nwhere not exists (\n select *\n from pairs1 c\n where c.person_a_id = a.id\n and c.person_b_id = b.id)\norder by a.id * rand()\nlimit 1;\n
\nLimit 1 returns just one pair if you are "drawing lots" one at a time. Otherwise, up the limit to however many pairs you need.
\nThe above query assumes that you can get
\n1 - 2\n2 - 7\n
\nand that the pairing 2 - 7 is valid since it doesn't exist, even if 2 is featured again. If you only want a person to feature in only one pair ever, then
\nselect a.id, b.id\nfrom people1 a\ninner join people1 b on a.id < b.id\nwhere not exists (\n select *\n from pairs1 c\n where c.person_a_id in (a.id, b.id))\n and not exists (\n select *\n from pairs1 c\n where c.person_b_id in (a.id, b.id))\norder by a.id * rand()\nlimit 1;\n
\nIf multiple pairs are to be generated in one single query, AND the destination table is still empty, you could use this single query. Take note that LIMIT 6 returns only 3 pairs.
\nselect min(a) a, min(b) b\nfrom\n(\n select\n case when mod(@p,2) = 1 then id end a,\n case when mod(@p,2) = 0 then id end b,\n @p:=@p+1 grp\n from (\n select id\n from (select @p:=1) p, people1\n order by rand()\n limit 6\n ) x\n) y\ngroup by floor(grp/2)\n
\n
soup wrap:
select a.id, b.id
from people1 a
inner join people1 b on a.id < b.id
where not exists (
select *
from pairs1 c
where c.person_a_id = a.id
and c.person_b_id = b.id)
order by a.id * rand()
limit 1;
Limit 1 returns just one pair if you are "drawing lots" one at a time. Otherwise, up the limit to however many pairs you need.
The above query assumes that you can get
1 - 2
2 - 7
and that the pairing 2 - 7 is valid since it doesn't exist, even if 2 is featured again. If you only want a person to feature in only one pair ever, then
select a.id, b.id
from people1 a
inner join people1 b on a.id < b.id
where not exists (
select *
from pairs1 c
where c.person_a_id in (a.id, b.id))
and not exists (
select *
from pairs1 c
where c.person_b_id in (a.id, b.id))
order by a.id * rand()
limit 1;
If multiple pairs are to be generated in one single query, AND the destination table is still empty, you could use this single query. Take note that LIMIT 6 returns only 3 pairs.
select min(a) a, min(b) b
from
(
select
case when mod(@p,2) = 1 then id end a,
case when mod(@p,2) = 0 then id end b,
@p:=@p+1 grp
from (
select id
from (select @p:=1) p, people1
order by rand()
limit 6
) x
) y
group by floor(grp/2)
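The @p grouping trick above is MySQL-specific (SQLite, for one, has no user variables), but the idea it implements — shuffle the ids, then pair off consecutive rows — is easy to sketch procedurally. This is an illustration only; table name people1 follows the answer, and the seed is fixed just to make the demo deterministic.

```python
import random
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table people1 (id int primary key)")
con.executemany("insert into people1 values (?)", [(n,) for n in range(1, 7)])

# Same idea as ORDER BY rand() + group by floor(grp/2): shuffle the ids,
# then take consecutive rows two at a time as the pairs.
random.seed(0)  # deterministic for the demo
ids = [r[0] for r in con.execute("select id from people1")]
random.shuffle(ids)
pairs = [(ids[k], ids[k + 1]) for k in range(0, len(ids) - 1, 2)]
```

Six people yield three pairs, and every person appears in exactly one pair.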
qid & accept id:
(4841038, 4841062)
query:
Force MySQL to use two indexes on a Join
soup wrap:
See MySQL Docs for FORCE INDEX.
JOIN survey_customer_similarity AS scs
FORCE INDEX (CONSUMER_ID_1,CONSUMER_ID_2)
ON
cr.CONSUMER_ID=scs.CONSUMER_ID_2
AND cal.SENDER_CONSUMER_ID=scs.CONSUMER_ID_1
OR cr.CONSUMER_ID=scs.CONSUMER_ID_1
AND cal.SENDER_CONSUMER_ID=scs.CONSUMER_ID_2
As TheScrumMeister has pointed out, whether two indexes can actually be used at once depends on your data.
Here's an example where you need to force the table to appear twice to control the query execution and intersection.
Use this to create a table with >100K records, with roughly 1K rows matching the filter i in (2,3) and 1K rows matching j in (2,3):
drop table if exists t1;
create table t1 (id int auto_increment primary key, i int, j int);
create index ix_t1_on_i on t1(i);
create index ix_t1_on_j on t1(j);
insert into t1 (i,j) values (2,2),(2,3),(4,5),(6,6),(2,6),(2,7),(3,2);
insert into t1 (i,j) select i*2, j*2+i from t1;
insert into t1 (i,j) select i*2, j*2+i from t1;
insert into t1 (i,j) select i*2, j*2+i from t1;
insert into t1 (i,j) select i*2, j*2+i from t1;
insert into t1 (i,j) select i*2, j*2+i from t1;
insert into t1 (i,j) select i*2, j*2+i from t1;
insert into t1 (i,j) select i*2, j*2+i from t1;
insert into t1 (i,j) select i*2, j*2+i from t1;
insert into t1 (i,j) select i*2, j*2+i from t1;
insert into t1 (i,j) select i*2, j*2+i from t1;
insert into t1 (i,j) select i*2, j*2+i from t1;
insert into t1 (i,j) select i*2, j*2+i from t1;
insert into t1 (i,j) select i, j from t1;
insert into t1 (i,j) select i, j from t1;
insert into t1 (i,j) select 2, j from t1 where not j in (2,3) limit 1000;
insert into t1 (i,j) select i, 3 from t1 where not i in (2,3) limit 1000;
When doing:
select t.* from t1 as t where t.i=2 and t.j=3 or t.i=3 and t.j=2
you get exactly 8 matches:
+-------+------+------+
| id | i | j |
+-------+------+------+
| 7 | 3 | 2 |
| 28679 | 3 | 2 |
| 57351 | 3 | 2 |
| 86023 | 3 | 2 |
| 2 | 2 | 3 |
| 28674 | 2 | 3 |
| 57346 | 2 | 3 |
| 86018 | 2 | 3 |
+-------+------+------+
Use EXPLAIN on the query above to get:
id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra
1 | SIMPLE | t | range | ix_t1_on_i,ix_t1_on_j | ix_t1_on_j | 5 | NULL | 1012 | Using where
Even if we add FORCE INDEX for both indexes to this query, EXPLAIN returns exactly the same plan.
To make it collect across two indexes, and then intersect them, use this:
select a.* from t1 as a force index(ix_t1_on_i)
join t1 as b force index(ix_t1_on_j) on a.id=b.id
where a.i=2 and b.j=3 or a.i=3 and b.j=2
Use that query with explain to get:
id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra
1 | SIMPLE | a | range | ix_t1_on_i | ix_t1_on_i | 5 | NULL | 1019 | Using where
1 | SIMPLE | b | range | ix_t1_on_j | ix_t1_on_j | 5 | NULL | 1012 | Using where; Using index
This proves that the indexes are being used. But that may or may not be faster depending on many other factors.
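One part of the rewrite above is easy to sanity-check anywhere: because the second alias is joined back on the primary key, the self-join must return exactly the rows the single-table query returns. A small sketch using SQLite (which has no FORCE INDEX, so only the row equivalence is checked here, not the plan) with the same table and seed rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    create table t1 (id integer primary key autoincrement, i int, j int);
    create index ix_t1_on_i on t1(i);
    create index ix_t1_on_j on t1(j);
    insert into t1 (i,j) values (2,2),(2,3),(4,5),(6,6),(2,6),(2,7),(3,2);
""")

single = con.execute(
    "select id, i, j from t1 where i=2 and j=3 or i=3 and j=2").fetchall()

# Same predicate, but each half of the OR reads through its own alias,
# with the aliases tied together on the primary key.
rewrite = con.execute("""
    select a.id, a.i, a.j
    from t1 as a
    join t1 as b on a.id = b.id
    where a.i=2 and b.j=3 or a.i=3 and b.j=2
""").fetchall()
```

Both queries return the two matching rows, so the rewrite changes the access path without changing the result.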
qid & accept id:
(4857837, 4857878)
query:
SQL query that can select n rows order by and then return m row
soup wrap:
The literal interpretation would lead to
select top 1000 * from tbl order by columnname
And the next step to
SELECT TOP 100 * FROM (select top 1000 * from tbl order by columnname) SQ
But that gives nothing different from a direct
select top 100 * from tbl order by columnname
Unless you are after two different orderings
SELECT TOP 100 *
FROM (
select top 1000 * from tbl
order by columnname) SQ
ORDER BY othercolumn
or switching between asc/desc
SELECT TOP 100 *
FROM (
select top 1000 * from tbl
order by columnname ASC) SQ
ORDER BY columnname DESC
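The asc/desc variant above can be sketched with LIMIT standing in for TOP (SQLite syntax here; table and column names follow the answer, with ten rows instead of a thousand): take the first five rows ascending, then present that subset descending.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table tbl (columnname int)")
con.executemany("insert into tbl values (?)", [(n,) for n in range(1, 11)])

# LIMIT plays the role TOP plays in T-SQL: keep the 5 smallest values,
# then re-order just that subset descending.
rows = con.execute("""
    select columnname
    from (select columnname from tbl order by columnname asc limit 5) sq
    order by columnname desc
""").fetchall()
result = [r[0] for r in rows]
```

The subset is 1..5, shown largest first — not the same thing as a plain descending LIMIT 5, which would return 10..6.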
qid & accept id:
(4866013, 4866299)
query:
Merging rows when counting - Django/SQL
soup wrap:
Django/SQL solution as requested:
the count of the different category_codes used:
category_codes_cnt = Item.objects.values('category_codes').distinct().count()
count of the different unique_codes used:
unique_codes_cnt = Item.objects.values('unique_codes').distinct().count()
count of the different combination of category_code and unique_code used:
codes_cnt = Item.objects.values('category_codes', 'unique_codes').distinct().count()
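Under the hood these three ORM calls boil down to COUNT(DISTINCT ...) queries. A sketch of the equivalent raw SQL with SQLite and a hypothetical item table (column names follow the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table item (category_codes text, unique_codes text)")
con.executemany("insert into item values (?,?)",
                [("A", "x"), ("A", "y"), ("B", "x"), ("B", "x")])

category_codes_cnt = con.execute(
    "select count(distinct category_codes) from item").fetchone()[0]
unique_codes_cnt = con.execute(
    "select count(distinct unique_codes) from item").fetchone()[0]
# COUNT(DISTINCT a, b) is not supported everywhere, so the combination
# is counted through a distinct subquery instead.
codes_cnt = con.execute(
    "select count(*) from (select distinct category_codes, unique_codes from item)"
).fetchone()[0]
```

With these four rows there are 2 distinct category codes, 2 distinct unique codes, and 3 distinct combinations.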
qid & accept id:
(4890793, 4890867)
query:
MySQL database for hashes
soup wrap:
The hash column should be a CHAR(32), as that is the length of an MD5 hash:
CREATE TABLE `hashes` (
`id` INT NOT NULL AUTO_INCREMENT,
`hash` CHAR(32),
PRIMARY KEY (`id`)
);
mysql> describe hashes;
+-------+----------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------+----------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| hash | char(32) | YES | | NULL | |
+-------+----------+------+-----+---------+----------------+
If you want to select from the table given user input:
-- Insert sample data:
mysql> INSERT INTO `hashes` VALUES (null, MD5('hello'));
Query OK, 1 row affected (0.00 sec)
-- Test retrieval:
mysql> SELECT * FROM `hashes` WHERE `hash` = MD5('hello');
+----+----------------------------------+
| id | hash |
+----+----------------------------------+
| 1 | 5d41402abc4b2a76b9719d911017c592 |
+----+----------------------------------+
1 row in set (0.00 sec)
You can add a key on hash for better performance.
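The same insert-then-lookup round trip can be reproduced from application code; here is a sketch with Python's hashlib (whose md5 hexdigest matches MySQL's MD5()) and SQLite standing in for MySQL:

```python
import hashlib
import sqlite3

def md5_hex(s: str) -> str:
    # 32-character lowercase hex digest, same format as MySQL's MD5()
    return hashlib.md5(s.encode()).hexdigest()

con = sqlite3.connect(":memory:")
con.execute("create table hashes (id integer primary key, hash char(32))")
con.execute("insert into hashes (hash) values (?)", (md5_hex("hello"),))

# Retrieval mirrors: SELECT * FROM hashes WHERE hash = MD5('hello')
row = con.execute("select id, hash from hashes where hash = ?",
                  (md5_hex("hello"),)).fetchone()
```

The stored digest is the same 5d41402a... value the mysql session above shows.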
qid & accept id:
(4914898, 4915305)
query:
Selecting a record based on integer being in an array field
soup wrap:
If your formatting is EXACTLY
N1, N2 (e.g.) one comma and space between each N
Then use this WHERE clause
WHERE ', ' + AreaID + ',' LIKE '%, 53,%'
The addition of the prefix and suffix makes every number, anywhere in the list, consistently wrapped by comma-space and suffixed by comma. Otherwise, you may get false positives with 53 appearing in part of another number.
Note
- A LIKE expression will be anything but fast, since it will always scan the entire table.
- You should consider normalizing the data into two tables:
Tables become
House
+---------+-----------+-------------+-------+
| HouseID | HouseType | Description | Title |
+---------+-----------+-------------+-------+
| 21      | B         | data        | data  |
| 23      | B         | data        | data  |
| 24      | B         | data        | data  |
| 23      | B         | data        | data  |
+---------+-----------+-------------+-------+
HouseArea
+---------+-------
| HouseID | AreaID
+---------+-------
| 21 | 17
| 21 | 32
| 21 | 53
| 23 | 23
| 23 | 73
..etc
Then you can use
select * from house h
where exists (
select *
from housearea a
where h.houseid=a.houseid and a.areaid=53)
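Both approaches can be checked side by side. A sketch with SQLite (which concatenates with || where T-SQL uses +); the areaid_list column and the sample values here are made up for the demo, including a 153 entry to show the comma-wrapping avoids the false positive:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    create table house (houseid int, areaid_list text);
    insert into house values (21, '17, 32, 53'), (23, '23, 73'), (24, '153, 2');
    create table housearea (houseid int, areaid int);
    insert into housearea values
        (21,17),(21,32),(21,53),(23,23),(23,73),(24,153),(24,2);
""")

# The LIKE trick: wrap the list so every number is comma-delimited.
# House 24's "153" must NOT match the search for 53.
like_hits = [r[0] for r in con.execute(
    "select houseid from house where ', ' || areaid_list || ',' like '%, 53,%'")]

# The normalized version from the answer:
exists_hits = [r[0] for r in con.execute("""
    select houseid from house h
    where exists (select * from housearea a
                  where h.houseid = a.houseid and a.areaid = 53)
""")]
```

Both return only house 21, and only the normalized form can use an index on areaid.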
qid & accept id:
(4948269, 4948311)
query:
SQL : Test if a column has the "Not Null" property
soup wrap:
Any particular RDBMS?
In SQL Server
use master
SELECT COLUMNPROPERTY( OBJECT_ID('dbo.spt_values'),'number','AllowsNull')
Or (more standard)
select IS_NULLABLE
from INFORMATION_SCHEMA.COLUMNS
where TABLE_SCHEMA='dbo'
AND TABLE_NAME='spt_values'
AND COLUMN_NAME='number'
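For comparison, other engines expose the same fact differently. SQLite, for instance, has no INFORMATION_SCHEMA; its PRAGMA table_info reports nullability in a notnull column (1 meaning NOT NULL). A quick sketch with a stand-in table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table spt_values (number int not null, name text)")

# PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk);
# index 3 is the SQLite counterpart of IS_NULLABLE / AllowsNull.
nullability = {row[1]: row[3]
               for row in con.execute("pragma table_info(spt_values)")}
```

Here number is NOT NULL (notnull = 1) and name is nullable (notnull = 0).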
qid & accept id:
(4960337, 4960435)
query:
How find Customers who Bought Product A and D > 6 months apart?
soup wrap:
select A.CustID, ElapsedDays = datediff(d, A.InvoiceDate, B.InvoiceDate)
from Orders A
inner join Orders B on B.CustID = A.CustID
and B.ProdID = 312
-- more than 6 months later
and B.InvoiceDate > dateadd(m,6,A.InvoiceDate)
where A.ProdID = 105
The above query is a simple interpretation of your requirement, where ANY purchase of A(105) and D(312) occurred 6 months apart. If the customer purchased
- A in Jan,
- A in March,
- A in July, and then purchased
- D in September
it would return 2 rows for the customer (Jan and March), since both of those are followed by a D purchase more than 6 months later.
The following query instead finds all cases where the LAST A purchase is 6 months or more before the FIRST D purchase.
select A.CustID, ElapsedDays = datediff(d, A.InvoiceDate, B.InvoiceDate)
from (
select CustID, Max(InvoiceDate) InvoiceDate
from Orders
where ProdID = 105
group by CustID) A
inner join (
select CustID, Min(InvoiceDate) InvoiceDate
from Orders
where ProdID = 312
group by CustID) B on B.CustID = A.CustID
-- more than 6 months later
and B.InvoiceDate > dateadd(m,6,A.InvoiceDate)
And if for the same scenario above, you don't want to see this customer because the A (Jul) and D (Sep) purchases are not 6 months apart, you can exclude them from the first query using an EXISTS filter.
select A.CustID, ElapsedDays = datediff(d, A.InvoiceDate, B.InvoiceDate)
from Orders A
inner join Orders B on B.CustID = A.CustID
and B.ProdID = 312
-- more than 6 months later
and B.InvoiceDate > dateadd(m,6,A.InvoiceDate)
where A.ProdID = 105
AND NOT EXISTS (
SELECT *
FROM Orders C
WHERE C.CustID=A.CustID
AND C.InvoiceDate > A.InvoiceDate
and C.InvoiceDate < B.InvoiceDate
and C.ProdID in (105,312))
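The core join in the first query is easy to verify on a tiny data set. A sketch with SQLite, spelling DATEADD(m,6,...) as date(x, '+6 months'); customer 1's purchases of A (105) and D (312) are more than six months apart, customer 2's are not (the dates are invented for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table orders (custid int, prodid int, invoicedate text)")
con.executemany("insert into orders values (?,?,?)", [
    (1, 105, "2020-01-15"), (1, 312, "2020-09-01"),  # > 6 months apart
    (2, 105, "2020-01-15"), (2, 312, "2020-05-01"),  # too close together
])

hits = [r[0] for r in con.execute("""
    select a.custid
    from orders a
    join orders b on b.custid = a.custid
                 and b.prodid = 312
                 and b.invoicedate > date(a.invoicedate, '+6 months')
    where a.prodid = 105
""")]
```

Only customer 1 comes back, since 2020-09-01 is past 2020-07-15 while 2020-05-01 is not.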
qid & accept id:
(4971561, 4971605)
query:
traversing a tree upwards
soup wrap:
Reverse your comparison!
SELECT * FROM reg WHERE tree='/20/1/1/1/1' OR '/20/1/1/1/1' LIKE CONCAT(tree, "/%");
Good luck
mysql> create table temp_reg (tree varchar(255));
Query OK, 0 rows affected (0.01 sec)
mysql> insert into temp_reg values ('/20/1/1/1/1'),('/30/1/1/1'),('/20/1');
Query OK, 3 rows affected (0.00 sec)
Records: 3 Duplicates: 0 Warnings: 0
mysql> select * from temp_reg where '/20/1/1/1/1' LIKE CONCAT(tree, "%");
+-------------+
| tree |
+-------------+
| /20/1/1/1/1 |
| /20/1 |
+-------------+
2 rows in set (0.00 sec)
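The reversed comparison ports directly to other engines; here is the same demo in Python with SQLite (where string concatenation is || rather than CONCAT), including the "/%" suffix so that /20/1 matches as an ancestor but /30/1/1/1 does not:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table reg (tree text)")
con.executemany("insert into reg values (?)",
                [("/20/1/1/1/1",), ("/30/1/1/1",), ("/20/1",)])

# The comparison is reversed: the constant path is matched against each
# stored prefix, so every ancestor of /20/1/1/1/1 comes back.
ancestors = [r[0] for r in con.execute("""
    select tree from reg
    where tree = '/20/1/1/1/1' or '/20/1/1/1/1' like tree || '/%'
""")]
```

The node itself and /20/1 are returned; /30/1/1/1 is not.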
qid & accept id:
(4986731, 4986748)
query:
How to select mysql rows in the order of IN clause
soup wrap:
Use the FIND_IN_SET function:
SELECT e.*
FROM EMPLOYEE e
WHERE e.code in (1,3,2,4)
ORDER BY FIND_IN_SET(e.code, '1,3,2,4')
Or use a CASE statement:
SELECT e.*
FROM EMPLOYEE e
WHERE e.code in (1,3,2,4)
ORDER BY CASE e.code
WHEN 1 THEN 1
WHEN 3 THEN 2
WHEN 2 THEN 3
WHEN 4 THEN 4
END
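FIND_IN_SET is MySQL-only, but the CASE fallback above works anywhere. A sketch with SQLite showing the rows coming back in the 1, 3, 2, 4 order of the IN list:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table employee (code int)")
con.executemany("insert into employee values (?)", [(1,), (2,), (3,), (4,)])

# SQLite has no FIND_IN_SET, so this uses the CASE form of the answer:
# each code is mapped to its position in the desired ordering.
ordered = [r[0] for r in con.execute("""
    select code from employee
    where code in (1,3,2,4)
    order by case code when 1 then 1 when 3 then 2
                       when 2 then 3 when 4 then 4 end
""")]
```

The result order matches the IN list rather than the natural sort order.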
qid & accept id:
(5020149, 5020178)
query:
Limit SQL result by type (column value)
soup wrap:
select * from daily_meal where type = 'fruit' limit 1
union
select * from daily_meal where type = 'vegetable'
example
mysql> desc daily_meal;
+-------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+--------------+------+-----+---------+-------+
| name | varchar(100) | YES | | NULL | |
| type | varchar(100) | YES | | NULL | |
+-------+--------------+------+-----+---------+-------+
2 rows in set (0.00 sec)
mysql> select * from daily_meal;
+----------+-----------+
| name | type |
+----------+-----------+
| apple | fruit |
| potato | vegetable |
| eggplant | vegetable |
| cucumber | vegetable |
| lemon | fruit |
| orange | fruit |
| carrot | vegetable |
+----------+-----------+
7 rows in set (0.00 sec)
mysql> select * from daily_meal where type = 'fruit' limit 1
-> union
-> select * from daily_meal where type = 'vegetable';
+----------+-----------+
| name | type |
+----------+-----------+
| apple | fruit |
| potato | vegetable |
| eggplant | vegetable |
| cucumber | vegetable |
| carrot | vegetable |
+----------+-----------+
5 rows in set (0.00 sec)
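As the session above shows, MySQL applies the bare LIMIT to the first branch of the UNION. Other engines are stricter; SQLite, for instance, rejects a LIMIT before UNION, so the limited branch has to be wrapped in a subquery. A sketch with the same sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table daily_meal (name text, type text)")
con.executemany("insert into daily_meal values (?,?)", [
    ("apple", "fruit"), ("potato", "vegetable"), ("eggplant", "vegetable"),
    ("cucumber", "vegetable"), ("lemon", "fruit"), ("orange", "fruit"),
    ("carrot", "vegetable"),
])

# SQLite will not accept LIMIT directly on a UNION branch, so the
# limited branch is wrapped in a FROM subquery.
rows = con.execute("""
    select * from (select * from daily_meal where type = 'fruit' limit 1)
    union
    select * from daily_meal where type = 'vegetable'
""").fetchall()
```

The result is one fruit plus all four vegetables, five rows in total.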
qid & accept id:
(5081080, 5081132)
query:
Oracle Pl/SQl: custom function with intermediate results
soup wrap:
If I understand correctly, you just need to declare the "var" variable...
create or replace FUNCTION EXAMPLE (param IN VARCHAR2)
RETURN NUMBER
AS
var VARCHAR2(100); -- This datatype may need modification
BEGIN
select
into var
from dual;
return to_number();
END EXAMPLE ;
Depending on exactly what you're doing, there may be a better approach that doesn't need the SELECT ... FROM DUAL:
create or replace FUNCTION EXAMPLE (param IN VARCHAR2)
RETURN NUMBER
AS
var VARCHAR2(100); -- This datatype may need modification
BEGIN
var := ;
return to_number();
END EXAMPLE ;
qid & accept id:
(5087616, 5087839)
query:
Dynamically get the maximum and minimum allowable value for a number column?
soup wrap:
It seems that you want the records whose value for money = 0 to appear last.
If that is the case, you could use an ORDER BY clause like this:
order by
case when money = 0 then 0
else 1
end desc,
money desc
With a working example, that would be
create table tq84_order_by (
txt varchar2(10),
money number not null
);
insert into tq84_order_by values ('aaa', 0);
insert into tq84_order_by values ('bbb', 2);
insert into tq84_order_by values ('ccc',-3);
insert into tq84_order_by values ('ddd', 4);
insert into tq84_order_by values ('eee', 1);
select * from tq84_order_by
order by
case when money = 0 then 0
else 1
end desc,
money desc;
resulting in
TXT MONEY
---------- ----------
ddd 4
bbb 2
eee 1
ccc -3
aaa 0
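The same worked example runs unchanged on SQLite; a sketch reproducing the zeros-last ordering from Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table tq84_order_by (txt text, money real not null)")
con.executemany("insert into tq84_order_by values (?,?)",
                [("aaa", 0), ("bbb", 2), ("ccc", -3), ("ddd", 4), ("eee", 1)])

# The CASE expression sorts non-zero rows (1) before zero rows (0) when
# taken DESC, then money DESC orders within each group.
ordered = [r[0] for r in con.execute("""
    select txt from tq84_order_by
    order by case when money = 0 then 0 else 1 end desc,
             money desc
""")]
```

The zero row lands last, even behind the negative value, matching the result table above.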
qid & accept id:
(5093557, 5671243)
query:
Updating intersection tables, alternative to delete->insert
soup wrap:
Let's say the table starts like this.
order_accessories
PK_refno PK_acc
1 73
1 74
1 75
1 86
1 92
Let's also say that 75 is supposed to be 76. Assuming a sane user interface, the user can just change 75 to 76. A sane user interface would send this statement to the dbms.
update order_accessories
set PK_acc = 76
where (PK_refno = 1 and PK_acc = 75);
If 75 were not supposed to be there in the first place, then the user would just delete that one row. A sane user interface would send this statement to the dbms.
delete from order_accessories
where (PK_refno = 1 and PK_acc = 75);
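Both statements work fine on a composite-key intersection table without any delete-then-reinsert of the whole set. A sketch of the update case with SQLite:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""create table order_accessories
               (pk_refno int, pk_acc int, primary key (pk_refno, pk_acc))""")
con.executemany("insert into order_accessories values (?,?)",
                [(1, 73), (1, 74), (1, 75), (1, 86), (1, 92)])

# Change 75 to 76 in place; the other four rows are untouched.
con.execute("""update order_accessories set pk_acc = 76
               where pk_refno = 1 and pk_acc = 75""")
accs = sorted(r[0] for r in con.execute(
    "select pk_acc from order_accessories where pk_refno = 1"))
```

Only the one targeted row changed.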
qid & accept id:
(5111728, 5111823)
query:
sql select puzzle: remove children when parent is filtered out
soup wrap:
ANSI compliant. Each specific DBMS may have a faster implementation.
select *
from tbl
where id in -- PARENTS of CHILDREN that match
( select parent_id from tbl
where values0 > 10 and has_children = 0)
or id in -- ONE CHILD ONLY
( select MIN(id) from tbl
where values0 > 10 and has_children = 0
group by parent_id)
or id in -- PARENTS
( select id from tbl
where values0 > 10 and has_children = 1)
Better written as a JOIN
select t.*
from
( select parent_id as ID from tbl
where values0 > 10 and has_children = 0
UNION
select MIN(id) from tbl
where values0 > 10 and has_children = 0
group by parent_id
UNION
select id from tbl
where values0 > 10 and has_children = 1) X
join tbl t on X.ID = t.ID
qid & accept id:
(5171809, 5171866)
query:
Edit query based on parameters in SQL Reporting Services
soup wrap:
There are two ways you could do it:
- Write multiple queries (one for each table), then switch among them based upon the parameter value
- Use dynamic SQL
For 1, you'd do something like this:
if @param = 'value'
select Col1, Col2 from Table1
else
select Col1, Col2 from Table2
For 2, you'd do something like this:
declare @sql nvarchar(4000)
select @sql = 'select Col1, Col2 from ' + (case when @param = 'value' then 'Table1' else 'Table2' end)
exec sp_executesql @sql
WARNING: Be very careful of option 2. If option 1 is feasible, then it is the safer option, as dynamically constructing SQL based upon user-supplied values is always a dangerous affair. While this particular example doesn't use the parameter directly in the SQL, it would be very easy to write something that did, and thus very easy to find a way to exploit it.
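The safety point generalizes beyond T-SQL: if you must build the statement dynamically, pick the identifier from a fixed whitelist rather than splicing user input into the string. A sketch of that pattern in Python with SQLite (table and column names are invented to mirror the example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    create table Table1 (Col1 int, Col2 text);
    create table Table2 (Col1 int, Col2 text);
    insert into Table1 values (1, 'one');
    insert into Table2 values (2, 'two');
""")

def fetch(param: str):
    # Mirrors the T-SQL CASE: the parameter only SELECTS between two
    # hard-coded names, so user input never reaches the SQL text itself.
    table = "Table1" if param == "value" else "Table2"
    return con.execute(f"select Col1, Col2 from {table}").fetchall()
```

Whatever the caller passes, only Table1 or Table2 can ever be queried.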
qid & accept id:
(5182723, 5182773)
query:
SQL - searching database with the LIKE operator
soup wrap:
If you're using SQL Server, have a look at SOUNDEX.
For your example:
select SOUNDEX('Dinosaurs'), SOUNDEX('Dinosores')
Returns identical values (D526).
You can also use DIFFERENCE function (on same link as soundex) that will compare levels of similarity (4 being the most similar, 0 being the least).
SELECT DIFFERENCE('Dinosaurs', 'Dinosores'); --returns 4
Edit:
After hunting around a bit for a multi-text option, it seems that this isn't all that easy. I would refer you to the link in the Fuzzy Logic answer provided by @Neil Knight (+1 to that, from me!).
This Stack Overflow article also details possible sources of implementations of fuzzy logic in TSQL. One respondent also outlined full-text indexing as a potential option that you might want to investigate.
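To see why both spellings collapse to D526, here is a rough Python sketch of the classic Soundex scheme (letter plus three digits). This is the textbook algorithm, not a byte-for-byte reimplementation of SQL Server's SOUNDEX, but it agrees on the example words:

```python
def soundex(word: str) -> str:
    # Map consonants to the classic Soundex digit groups.
    codes = {}
    for letters, digit in [("BFPV", "1"), ("CGJKQSXZ", "2"), ("DT", "3"),
                           ("L", "4"), ("MN", "5"), ("R", "6")]:
        for ch in letters:
            codes[ch] = digit
    word = word.upper()
    result = word[0]
    prev = codes.get(word[0], "")
    for ch in word[1:]:
        digit = codes.get(ch, "")
        if digit and digit != prev:
            result += digit
        if ch not in "HW":   # H and W do not break a run of equal codes
            prev = digit
    return (result + "000")[:4]  # pad/truncate to letter + three digits
```

Vowels are dropped and s maps to the same digit regardless of the surrounding vowels, so Dinosaurs and Dinosores both encode as D526.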
qid & accept id:
(5282607, 5282641)
query:
How to Filter grouped query result set (SQL)
soup wrap:
You can add a condition that says "this Code must have a row with CT", so make a sub-query:
SELECT Code FROM transaction WHERE kind='CT' GROUP BY Code ;
And to your first query add a filter to show only those records which have Code in previous subquery:
... AND Code IN (SELECT Code FROM transaction WHERE kind='CT' GROUP BY Code ) ...
This will get rid of the record with Code 2, because 2 will not be in the results of the subquery.
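A quick check of the IN-subquery filter with SQLite; note "transaction" is a reserved word in several engines, so the sketch quotes it, and the sample rows are invented so that Code 2 never has a CT row:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('create table "transaction" (code int, kind text)')
con.executemany('insert into "transaction" values (?,?)', [
    (1, "CT"), (1, "DR"), (2, "DR"),   # Code 2 has no CT row
])

# Only codes that appear at least once with kind CT survive the filter.
codes = [r[0] for r in con.execute("""
    select code from "transaction"
    where code in (select code from "transaction" where kind = 'CT')
    group by code
""")]
```

Code 2 is filtered out exactly as described.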
qid & accept id:
(5290418, 5290539)
query:
How to insert a row's primary key in to another one of its columns?
soup wrap:
You can do it in a single call from php to mysql if you use a stored procedure:
Example calls
call insert_employee('f00',32);
call insert_employee('bar',64);
$sql = sprintf("call insert_employee('%s',%d)", $name, $age);
Script
drop table if exists employees;
create table employees
(
id int unsigned not null auto_increment primary key,
name varchar(32) not null,
age tinyint unsigned not null default 0,
pid int unsigned not null default 0
)
engine=innodb;
drop procedure if exists insert_employee;
delimiter #
create procedure insert_employee
(
in p_name varchar(32),
in p_age tinyint unsigned
)
begin
declare v_id int unsigned default 0;
insert into employees(name, age) values (p_name, p_age);
set v_id = last_insert_id();
update employees set pid = v_id where id = v_id;
end#
delimiter ;
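If a stored procedure feels like overkill, the same effect can be sketched as two plain statements. LAST_INSERT_ID() stays stable across the UPDATE, since only a new insert into an AUTO_INCREMENT column (or an explicit LAST_INSERT_ID(expr)) changes it:

```sql
-- 'baz' / 28 are example values
INSERT INTO employees(name, age) VALUES ('baz', 28);
UPDATE employees SET pid = LAST_INSERT_ID() WHERE id = LAST_INSERT_ID();
```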
qid & accept id:
(5292145, 5292209)
query:
SQL query for maximum date
soup:
I think you want something like
\nSELECT E.UserID\n , E.EntryDate\n , (SELECT TOP 1 Detail\n FROM Status AS S\n WHERE S.UserID = E.UserID\n AND S.StatusDate <= E.EntryDate\n ORDER BY S.StatusDate DESC)\nFROM Entry AS E\n
\nIf your database doesn't support TOP or for performance reasons you would prefer to avoid the ORDER BY you could try something like:
\nSELECT E.UserID\n , E.EntryDate\n , (SELECT S1.Detail\n FROM Status AS S1\n WHERE S1.UserID = E.UserID\n AND S1.StatusDate = (SELECT MAX(S2.StatusDate)\n FROM Status AS S2\n WHERE S2.UserID = E.UserID\n AND S2.StatusDate <= E.EntryDate))\nFROM Entry AS E\n
\n
soup wrap:
I think you want something like
SELECT E.UserID
, E.EntryDate
, (SELECT TOP 1 Detail
FROM Status AS S
WHERE S.UserID = E.UserID
AND S.StatusDate <= E.EntryDate
ORDER BY S.StatusDate DESC)
FROM Entry AS E
If your database doesn't support TOP or for performance reasons you would prefer to avoid the ORDER BY you could try something like:
SELECT E.UserID
, E.EntryDate
, (SELECT S1.Detail
FROM Status AS S1
WHERE S1.UserID = E.UserID
AND S1.StatusDate = (SELECT MAX(S2.StatusDate)
FROM Status AS S2
WHERE S2.UserID = E.UserID
AND S2.StatusDate <= E.EntryDate))
FROM Entry AS E
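For reference, a minimal schema that both queries assume (column types are guesses):

```sql
CREATE TABLE Entry  (UserID INT, EntryDate  DATETIME);
CREATE TABLE Status (UserID INT, StatusDate DATETIME, Detail VARCHAR(100));
```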
qid & accept id:
(5331808, 5331839)
query:
How do I combine the results of two queries with ordering?
soup:
You can use UNION ALL to get rows from both tables:
\nSELECT id, article, author, tag, date FROM table1 WHERE tag = '1'\nUNION ALL\nSELECT id, article, author, tag, date FROM table2 WHERE tag = '3'\nORDER BY date\n
\n
\nYou may also want to consider restructuring your database so that instead of using two tables you use just a single table with a field to distinguish the type of each row. Then the query can simplify to:
\nSELECT id, article, author, tag, date\nFROM yourtable\nWHERE (tag, type) IN (('1','type1'), ('3','type2'))\nORDER BY date\n
\n
soup wrap:
You can use UNION ALL to get rows from both tables:
SELECT id, article, author, tag, date FROM table1 WHERE tag = '1'
UNION ALL
SELECT id, article, author, tag, date FROM table2 WHERE tag = '3'
ORDER BY date
You may also want to consider restructuring your database so that instead of using two tables you use just a single table with a field to distinguish the type of each row. Then the query can simplify to:
SELECT id, article, author, tag, date
FROM yourtable
WHERE (tag, type) IN (('1','type1'), ('3','type2'))
ORDER BY date
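A sketch of the single-table layout suggested above (the column types, and the idea that type replaces the two-table split, are assumptions):

```sql
CREATE TABLE yourtable (
  id      INT PRIMARY KEY,
  article TEXT,
  author  VARCHAR(100),
  tag     VARCHAR(10),
  type    VARCHAR(10),   -- replaces the table1 / table2 split
  date    DATETIME
);
```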
qid & accept id:
(5355585, 5355648)
query:
how to sort order of LEFT JOIN in SQL query?
soup:
Try using MAX with a GROUP BY.
\nSELECT u.userName, MAX(c.carPrice)\nFROM users u\n LEFT JOIN cars c ON u.id = c.belongsToUser\nWHERE u.id = 4\nGROUP BY u.userName;\n
\n
\nFurther information on GROUP BY
\nThe group by clause is used to split the selected records into groups based on unique combinations of the group by columns. This then allows us to use aggregate functions (eg. MAX, MIN, SUM, AVG, ...) that will be applied to each group of records in turn. The database will return a single result record for each grouping.
\nFor example, if we have a set of records representing temperatures over time and location in a table like this:
\nLocation Time Temperature\n-------- ---- -----------\nLondon 12:00 10.0\nBristol 12:00 12.0\nGlasgow 12:00 5.0\nLondon 13:00 14.0\nBristol 13:00 13.0\nGlasgow 13:00 7.0\n...\n
\nThen if we want to find the maximum temperature by location, then we need to split the temperature records into groupings, where each record in a particular group has the same location. We then want to find the maximum temperature of each group. The query to do this would be as follows:
\nSELECT Location, MAX(Temperature)\nFROM Temperatures\nGROUP BY Location;\n
\n
soup wrap:
Try using MAX with a GROUP BY.
SELECT u.userName, MAX(c.carPrice)
FROM users u
LEFT JOIN cars c ON u.id = c.belongsToUser
WHERE u.id = 4
GROUP BY u.userName;
Further information on GROUP BY
The group by clause is used to split the selected records into groups based on unique combinations of the group by columns. This then allows us to use aggregate functions (eg. MAX, MIN, SUM, AVG, ...) that will be applied to each group of records in turn. The database will return a single result record for each grouping.
For example, if we have a set of records representing temperatures over time and location in a table like this:
Location Time Temperature
-------- ---- -----------
London 12:00 10.0
Bristol 12:00 12.0
Glasgow 12:00 5.0
London 13:00 14.0
Bristol 13:00 13.0
Glasgow 13:00 7.0
...
Then if we want to find the maximum temperature by location, then we need to split the temperature records into groupings, where each record in a particular group has the same location. We then want to find the maximum temperature of each group. The query to do this would be as follows:
SELECT Location, MAX(Temperature)
FROM Temperatures
GROUP BY Location;
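Against the sample data above, that query returns one row per location (row order is not guaranteed without an ORDER BY):

```
Location MAX(Temperature)
-------- ----------------
London   14.0
Bristol  13.0
Glasgow  7.0
```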
qid & accept id:
(5380843, 5380919)
query:
Polymorphic ORM database pattern
soup:
You're having difficulty finding it because it's not a real (in the sense of widely adopted and encouraged) database design pattern.
\nStay away from patterns like this. While ORM's make mapping database tables to types easier, tables are not types, and vice versa. While it's not clear what the model you've described is supposed to do, you should not have columns that serve as fake foreign keys to multiple tables (when I say "fake", I mean that you're storing a simple identifier value that corresponds to the primary key of another table, but you can't actually define the column as a foreign key).
\nModel your database to represent the data, model your objects to represent the process, and use your ORM and intermediate layers to do the translation; don't try to push the database into your code, and don't push your code into the database.
\nEdit in response to comment
\nYou're mixing database and OO terminology; while I'm not familiar with the syntax you're using to define that function, I'm assuming it's an instance function on the User type called getLocation that takes no parameters and returns a Location object. Databases don't support the concepts of instance (or any type-based) functions; relational databases can have user-defined functions, but these are simple procedural functions that take parameters and return either values or result sets. They do not correspond to particular tables or field in any way, other than the fact that you can use them within the body of the function.
\nThat being said, there are two questions to answer here: how to do what you've asked, and what might be a better solution.
\nFor what you've asked, it sounds like you have a supertype-subtype relationship, which is a standard database design pattern. In this case, you have a single supertype table that represents the parent:
\nLocation\n---------------\nLocationID (PK)\n...other common attributes\n
\n(Note here that I'm using LocationID for the sake of simplicity; you should have more specific and logical attributes to define the primary key, if possible)
\nThen you have one or more tables that define subtypes:
\nAddress\n-----------\nLocationID (PK, FK to Location)\n...address-specific attributes\n\nCountry\n-----------\nLocationID (PK, FK to Location)\n...country-specific attributes\n
\nIf a specific instance of Location can only be one of the subtypes, then you should add a discriminator value to the parent table (Location) that indicates which of the subtypes it corresponds to. You can use CHECK constraints to ensure that only valid values are in this field for a given row.
\nIn the end, though, it sounds like you might be better served with a hybrid approach. You're fundamentally representing two different types of locations, from what I can see:
\n\n- Coordinate-based locations (L&L)
\n- Municipal/Postal/Etc.-based locations (Country, City, Address), and each of these is simply a more specific version of the previous
\n
\nGiven this, a simple model would look like this:
\nLocation\n------------\nLocationID (PK)\nLocationType (non-nullable) ('C' for coordinate, 'P' for postal)\n\nLocationCoordinate\n------------------\nLocationID (PK; FK to Location)\nLatitude (non-nullable)\nLongitude (non-nullable)\n\nLocationPostal\n------------------\nLocationID (PK, FK to Location)\nCountry (non-nullable)\nCity (nullable)\nAddress (nullable)\n
\nNow the only problem that remains is that we have nullable columns. If you want to keep your queries simple but take (justified!) flak from people about leaving nullable columns, then you can leave it as-is. If you want to go to what most people would consider a better-designed database, you can move to 6NF for our two nullable columns. Doing this will also have the nice side-effect of giving us a little more control over how these fields are populated without having to do anything extra.
\nOur two nullable fields are City and Address. I am going to assume that having an Address without a City would be nonsense. In this case, we remove these two attributes from the LocationPostal table and create two more tables:
\nLocationPostalCity\n------------------\nLocationID (PK; FK to LocationPostal)\nCity (non-nullable)\n\nLocationPostalCityAddress\n-------------------------\nLocationID (PK; FK to LocationPostalCity)\nAddress (non-nullable)\n
\n
soup wrap:
You're having difficulty finding it because it's not a real (in the sense of widely adopted and encouraged) database design pattern.
Stay away from patterns like this. While ORMs make mapping database tables to types easier, tables are not types, and vice versa. While it's not clear what the model you've described is supposed to do, you should not have columns that serve as fake foreign keys to multiple tables (when I say "fake", I mean that you're storing a simple identifier value that corresponds to the primary key of another table, but you can't actually define the column as a foreign key).
Model your database to represent the data, model your objects to represent the process, and use your ORM and intermediate layers to do the translation; don't try to push the database into your code, and don't push your code into the database.
Edit in response to comment
You're mixing database and OO terminology; while I'm not familiar with the syntax you're using to define that function, I'm assuming it's an instance function on the User type called getLocation that takes no parameters and returns a Location object. Databases don't support the concept of instance (or any type-based) functions; relational databases can have user-defined functions, but these are simple procedural functions that take parameters and return either values or result sets. They do not correspond to particular tables or fields in any way, other than the fact that you can use them within the body of the function.
That being said, there are two questions to answer here: how to do what you've asked, and what might be a better solution.
For what you've asked, it sounds like you have a supertype-subtype relationship, which is a standard database design pattern. In this case, you have a single supertype table that represents the parent:
Location
---------------
LocationID (PK)
...other common attributes
(Note here that I'm using LocationID for the sake of simplicity; you should have more specific and logical attributes to define the primary key, if possible)
Then you have one or more tables that define subtypes:
Address
-----------
LocationID (PK, FK to Location)
...address-specific attributes
Country
-----------
LocationID (PK, FK to Location)
...country-specific attributes
If a specific instance of Location can only be one of the subtypes, then you should add a discriminator value to the parent table (Location) that indicates which of the subtypes it corresponds to. You can use CHECK constraints to ensure that only valid values are in this field for a given row.
In the end, though, it sounds like you might be better served with a hybrid approach. You're fundamentally representing two different types of locations, from what I can see:
- Coordinate-based locations (L&L)
- Municipal/Postal/Etc.-based locations (Country, City, Address), and each of these is simply a more specific version of the previous
Given this, a simple model would look like this:
Location
------------
LocationID (PK)
LocationType (non-nullable) ('C' for coordinate, 'P' for postal)
LocationCoordinate
------------------
LocationID (PK; FK to Location)
Latitude (non-nullable)
Longitude (non-nullable)
LocationPostal
------------------
LocationID (PK, FK to Location)
Country (non-nullable)
City (nullable)
Address (nullable)
Now the only problem that remains is that we have nullable columns. If you want to keep your queries simple but take (justified!) flak from people about leaving nullable columns, then you can leave it as-is. If you want to go to what most people would consider a better-designed database, you can move to 6NF for our two nullable columns. Doing this will also have the nice side-effect of giving us a little more control over how these fields are populated without having to do anything extra.
Our two nullable fields are City and Address. I am going to assume that having an Address without a City would be nonsense. In this case, we remove these two attributes from the LocationPostal table and create two more tables:
LocationPostalCity
------------------
LocationID (PK; FK to LocationPostal)
City (non-nullable)
LocationPostalCityAddress
-------------------------
LocationID (PK; FK to LocationPostalCity)
Address (non-nullable)
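As a DDL sketch of the final model (types, lengths, and the inline constraint syntax are assumptions; adjust for your RDBMS):

```sql
CREATE TABLE Location (
  LocationID   INT PRIMARY KEY,
  LocationType CHAR(1) NOT NULL CHECK (LocationType IN ('C', 'P'))
);

CREATE TABLE LocationCoordinate (
  LocationID INT PRIMARY KEY REFERENCES Location(LocationID),
  Latitude   DECIMAL(9,6) NOT NULL,
  Longitude  DECIMAL(9,6) NOT NULL
);

CREATE TABLE LocationPostal (
  LocationID INT PRIMARY KEY REFERENCES Location(LocationID),
  Country    VARCHAR(100) NOT NULL
);

CREATE TABLE LocationPostalCity (
  LocationID INT PRIMARY KEY REFERENCES LocationPostal(LocationID),
  City       VARCHAR(100) NOT NULL
);

CREATE TABLE LocationPostalCityAddress (
  LocationID INT PRIMARY KEY REFERENCES LocationPostalCity(LocationID),
  Address    VARCHAR(200) NOT NULL
);
```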
qid & accept id:
(5393244, 5393325)
query:
PLSQL read value from XML (Again)?
soup:
Do the same thing as in the answer you referenced, but change the XPath expression (second argument to XMLTYPE) from
\n'//SOAProxyResult'\n
\nto e.g.
\n'//t:ItemId/@Id'\n
\nor
\n'//t:ItemId/@ChangeKey'\n
\nThe third argument will need to declare the t namespace prefix:
\n'xmlns:t="foobarbaz"'\n
\nand of course your input XML will need to declare that namespace prefix too.
\n
soup wrap:
Do the same thing as in the answer you referenced, but change the XPath expression (second argument to XMLTYPE) from
'//SOAProxyResult'
to e.g.
'//t:ItemId/@Id'
or
'//t:ItemId/@ChangeKey'
The third argument will need to declare the t namespace prefix:
'xmlns:t="foobarbaz"'
and of course your input XML will need to declare that namespace prefix too.
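Assuming the referenced answer uses Oracle's EXTRACTVALUE (the table and column names here are placeholders), the adjusted call might look like:

```sql
SELECT EXTRACTVALUE(
         XMLTYPE(t.response_clob),   -- placeholder column holding the XML
         '//t:ItemId/@Id',
         'xmlns:t="foobarbaz"'       -- placeholder namespace, as above
       ) AS item_id
FROM your_table t;                   -- placeholder table
```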
qid & accept id:
(5410918, 5410950)
query:
composing a SQL query with a date offset
soup:
This will return tomorrow's data
\nWHERE ChangingDate >= dateadd(dd, datediff(dd, 0, getdate())+1, 0)\nand ChangingDate < dateadd(dd, datediff(dd, 0, getdate())+2, 0)\n
\nThis will return today's data
\nWHERE ChangingDate >= dateadd(dd, datediff(dd, 0, getdate())+0, 0)\nand ChangingDate < dateadd(dd, datediff(dd, 0, getdate())+1, 0)\n
\nSee also How Does Between Work With Dates In SQL Server?
\n
soup wrap:
This will return tomorrow's data
WHERE ChangingDate >= dateadd(dd, datediff(dd, 0, getdate())+1, 0)
and ChangingDate < dateadd(dd, datediff(dd, 0, getdate())+2, 0)
This will return today's data
WHERE ChangingDate >= dateadd(dd, datediff(dd, 0, getdate())+0, 0)
and ChangingDate < dateadd(dd, datediff(dd, 0, getdate())+1, 0)
See also How Does Between Work With Dates In SQL Server?
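For what it's worth, the dateadd/datediff pair is a date-truncation idiom: datediff(dd, 0, getdate()) counts whole days since day 0 (1900-01-01), and dateadd adds that count back onto day 0, giving today's date at midnight:

```sql
SELECT dateadd(dd, datediff(dd, 0, getdate()), 0) AS today_at_midnight;
```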
qid & accept id:
(5455914, 5456435)
query:
TRIGGER based on spatial data
soup:
This doesn't work?
\nCREATE TRIGGER trig_pano_raw BEFORE INSERT ON pano_raw\nFOR EACH ROW\nBEGIN\n SET NEW.latlng = PointFromWKB( POINT( NEW.lat, NEW.lng ) );\nEND;\n
\nRegarding the Update trigger, note that
\n\n1st, it has to have a different name and
\n2nd, you may want to check which field is updated, like this:
\n
\nupdate trigger
\nDELIMITER $$\nCREATE TRIGGER trig_Update_pano_raw BEFORE UPDATE ON pano_raw\nFOR EACH ROW\nBEGIN\n IF ((NEW.lat != OLD.lat) OR (NEW.lng != OLD.lng))\n THEN\n SET NEW.latlng = PointFromWKB( POINT( NEW.lat, NEW.lng ) );\n ELSEIF (NEW.latlng != OLD.latlng)\n THEN\n BEGIN\n SET NEW.lat = X(NEW.latlng);\n SET NEW.lng = Y(NEW.latlng);\n END;\n END IF;\nEND;$$\nDELIMITER ;\n
\n
soup wrap:
This doesn't work?
DELIMITER $$
CREATE TRIGGER trig_pano_raw BEFORE INSERT ON pano_raw
FOR EACH ROW
BEGIN
SET NEW.latlng = PointFromWKB( POINT( NEW.lat, NEW.lng ) );
END;$$
DELIMITER ;
Regarding the Update trigger, note that
1st, it has to have a different name and
2nd, you may want to check which field is updated, like this:
update trigger
DELIMITER $$
CREATE TRIGGER trig_Update_pano_raw BEFORE UPDATE ON pano_raw
FOR EACH ROW
BEGIN
IF ((NEW.lat != OLD.lat) OR (NEW.lng != OLD.lng))
THEN
SET NEW.latlng = PointFromWKB( POINT( NEW.lat, NEW.lng ) );
ELSEIF (NEW.latlng != OLD.latlng)
THEN
BEGIN
SET NEW.lat = X(NEW.latlng);
SET NEW.lng = Y(NEW.latlng);
END;
END IF;
END;$$
DELIMITER ;
qid & accept id:
(5462205, 5462250)
query:
MySQL SELECT function to sum current data
soup:
This is called cumulative sum.
\nIn Oracle and PostgreSQL, it is calculated using a window function:
\nSELECT id, val, SUM(val) OVER (ORDER BY id ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)\nFROM mytable\n
\nHowever, MySQL does not support it.
\nIn MySQL, you can calculate it using session variables:
\nSET @s = 0;\n\nSELECT id, val, @s := @s + val\nFROM mytable\nORDER BY\n id\n;\n
\nor in a pure set-based but less efficient way:
\nSELECT t1.id, t1.val, SUM(t2.val)\nFROM mytable t1\nJOIN mytable t2\nON t2.id <= t1.id\nGROUP BY\n t1.id\n;\n
\n
soup wrap:
This is called cumulative sum.
In Oracle and PostgreSQL, it is calculated using a window function:
SELECT id, val, SUM(val) OVER (ORDER BY id ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
FROM mytable
However, MySQL does not support it.
In MySQL, you can calculate it using session variables:
SET @s = 0;
SELECT id, val, @s := @s + val
FROM mytable
ORDER BY
id
;
or in a pure set-based but less efficient way:
SELECT t1.id, t1.val, SUM(t2.val)
FROM mytable t1
JOIN mytable t2
ON t2.id <= t1.id
GROUP BY
t1.id
;
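A tiny worked example of the set-based version (the table and data are invented; I've also added t1.val to the GROUP BY to keep it portable to stricter SQL modes):

```sql
CREATE TABLE mytable (id INT PRIMARY KEY, val INT);
INSERT INTO mytable VALUES (1, 10), (2, 20), (3, 30);

SELECT t1.id, t1.val, SUM(t2.val) AS running_total
FROM mytable t1
JOIN mytable t2 ON t2.id <= t1.id
GROUP BY t1.id, t1.val;
-- id | val | running_total
--  1 |  10 |            10
--  2 |  20 |            30
--  3 |  30 |            60
```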
qid & accept id:
(5479975, 5480412)
query:
query for a set in a relational database
soup:
I won't comment on whether there is a better suited schema for doing this (it's quite possible), but for a schema having columns name and item, the following query should work. (mysql syntax)
\nSELECT k.name\nFROM (SELECT DISTINCT name FROM sets) AS k\nINNER JOIN sets i1 ON (k.name = i1.name AND i1.item = 1)\nINNER JOIN sets i2 ON (k.name = i2.name AND i2.item = 3)\nINNER JOIN sets i3 ON (k.name = i3.name AND i3.item = 5)\nLEFT JOIN sets ix ON (k.name = ix.name AND ix.item NOT IN (1, 3, 5))\nWHERE ix.name IS NULL;\n
\nThe idea is that we have all the set keys in k, which we then join with the set item data in sets once for each set item in the set we are searching for, three in this case. Each of the three inner joins with table aliases i1, i2 and i3 filter out all set names that don't contain the item searched for with that join. Finally, we have a left join with sets with table alias ix, which brings in all the extra items in the set, that is, every item we were not searching for. ix.name is NULL in the case that no extra items are found, which is exactly what we want, thus the WHERE clause. The query returns a row containing the set key if the set is found, no rows otherwise.
\n
\nEdit: The idea behind collapsars answer seems to be much better than mine, so here's a bit shorter version of that with explanation.
\nSELECT sets.name\nFROM sets\nLEFT JOIN (\n SELECT DISTINCT name\n FROM sets\n WHERE item NOT IN (1, 3, 5)\n) s1\nON (sets.name = s1.name)\nWHERE s1.name IS NULL\nGROUP BY sets.name\nHAVING COUNT(sets.item) = 3;\n
\nThe idea here is that subquery s1 selects the keys of all sets that contain items other than the ones we are looking for. Thus, when we left join sets with s1, s1.name is NULL when the set only contains items we are searching for. We then group by set key and filter out any sets having the wrong number of items. We are then left with only sets which contain only items we are searching for and are of the correct length. Since sets can only contain an item once, there can only be one set satisfying that criteria, and that's the one we're looking for.
\n
\nEdit: It just dawned on me how to do this without the exclusion.
\nSELECT totals.name\nFROM (\n SELECT name, COUNT(*) count\n FROM sets\n GROUP BY name\n) totals\nINNER JOIN (\n SELECT name, COUNT(*) count\n FROM sets\n WHERE item IN (1, 3, 5)\n GROUP BY name\n) matches\nON (totals.name = matches.name)\nWHERE totals.count = 3 AND matches.count = 3;\n
\nThe first subquery finds the total count of items in each set and the second one finds out the count of matching items in each set. When matches.count is 3, the set has all the items we're looking for, and if totals.count is also 3, the set doesn't have any extra items.
\n
soup wrap:
I won't comment on whether there is a better suited schema for doing this (it's quite possible), but for a schema having columns name and item, the following query should work. (mysql syntax)
SELECT k.name
FROM (SELECT DISTINCT name FROM sets) AS k
INNER JOIN sets i1 ON (k.name = i1.name AND i1.item = 1)
INNER JOIN sets i2 ON (k.name = i2.name AND i2.item = 3)
INNER JOIN sets i3 ON (k.name = i3.name AND i3.item = 5)
LEFT JOIN sets ix ON (k.name = ix.name AND ix.item NOT IN (1, 3, 5))
WHERE ix.name IS NULL;
The idea is that we have all the set keys in k, which we then join with the set item data in sets once for each set item in the set we are searching for, three in this case. Each of the three inner joins with table aliases i1, i2 and i3 filters out all set names that don't contain the item searched for with that join. Finally, we have a left join with sets with table alias ix, which brings in all the extra items in the set, that is, every item we were not searching for. ix.name is NULL in the case that no extra items are found, which is exactly what we want, thus the WHERE clause. The query returns a row containing the set key if the set is found, no rows otherwise.
Edit: The idea behind collapsar's answer seems to be much better than mine, so here's a bit shorter version of that with an explanation.
SELECT sets.name
FROM sets
LEFT JOIN (
SELECT DISTINCT name
FROM sets
WHERE item NOT IN (1, 3, 5)
) s1
ON (sets.name = s1.name)
WHERE s1.name IS NULL
GROUP BY sets.name
HAVING COUNT(sets.item) = 3;
The idea here is that subquery s1 selects the keys of all sets that contain items other than the ones we are looking for. Thus, when we left join sets with s1, s1.name is NULL when the set only contains items we are searching for. We then group by set key and filter out any sets having the wrong number of items. We are then left with only sets which contain only items we are searching for and are of the correct length. Since sets can only contain an item once, there can only be one set satisfying that criteria, and that's the one we're looking for.
Edit: It just dawned on me how to do this without the exclusion.
SELECT totals.name
FROM (
SELECT name, COUNT(*) count
FROM sets
GROUP BY name
) totals
INNER JOIN (
SELECT name, COUNT(*) count
FROM sets
WHERE item IN (1, 3, 5)
GROUP BY name
) matches
ON (totals.name = matches.name)
WHERE totals.count = 3 AND matches.count = 3;
The first subquery finds the total count of items in each set and the second one finds out the count of matching items in each set. When matches.count is 3, the set has all the items we're looking for, and if totals.count is also 3, the set doesn't have any extra items.
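To make the three queries concrete, here is an invented fixture where only set 'A' is exactly {1, 3, 5}:

```sql
CREATE TABLE sets (name CHAR(1), item INT, PRIMARY KEY (name, item));
INSERT INTO sets VALUES
  ('A',1),('A',3),('A',5),          -- exact match
  ('B',1),('B',3),                  -- subset: fails the count = 3 check
  ('C',1),('C',3),('C',5),('C',7);  -- superset: has the extra item 7
-- All three queries above should return just 'A'.
```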
qid & accept id:
(5501347, 5501454)
query:
Increment Oracle time in varchar field by a certain amount?
soup:
you could use the built-in date (and interval -- thanks Alex for the link) calculation:
\nto_char(to_date(:x, 'hh24:mi') + INTERVAL :y MINUTE,'hh24:mi')\n
\nfor instance:
\nSQL> WITH my_data AS (\n 2 SELECT '12:15' t FROM dual\n 3 UNION ALL SELECT '10:30' FROM dual\n 4 )\n 5 SELECT t, \n 6 to_char(to_date(t, 'hh24:mi') + INTERVAL '15' MINUTE,'hh24:mi')"t+15"\n 7 FROM my_data;\n\nT t+15\n----- -----\n12:15 12:30\n10:30 10:45\n
\n
soup wrap:
you could use the built-in date (and interval -- thanks Alex for the link) calculation:
to_char(to_date(:x, 'hh24:mi') + INTERVAL :y MINUTE,'hh24:mi')
for instance:
SQL> WITH my_data AS (
2 SELECT '12:15' t FROM dual
3 UNION ALL SELECT '10:30' FROM dual
4 )
5 SELECT t,
6 to_char(to_date(t, 'hh24:mi') + INTERVAL '15' MINUTE,'hh24:mi')"t+15"
7 FROM my_data;
T t+15
----- -----
12:15 12:30
10:30 10:45
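One caveat on the generic form: Oracle's INTERVAL literal syntax requires a quoted literal (as in INTERVAL '15' MINUTE), so when the minute count arrives as a bind variable, NUMTODSINTERVAL is the usual workaround:

```sql
to_char(to_date(:x, 'hh24:mi') + numtodsinterval(:y, 'MINUTE'), 'hh24:mi')
```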
qid & accept id:
(5606689, 5607174)
query:
SQL Server: Is there a way to check what is the resulting data type of implicit conversion?
soup:
The result of the expression is numeric (17,6). To see this
\nDECLARE @i INT, @v SQL_VARIANT\n\nSET @i = 3\nSET @v = @i / 9.0\n\nSELECT\n CAST(SQL_VARIANT_PROPERTY(@v, 'BaseType') AS VARCHAR(30)) AS BaseType,\n CAST(SQL_VARIANT_PROPERTY(@v, 'Precision') AS INT) AS Precision,\n CAST(SQL_VARIANT_PROPERTY(@v, 'Scale') AS INT) AS Scale\n
\nReturns
\nBaseType Precision Scale\n---------- ----------- -----------\nnumeric 17 6\n
\nEdit:
\nSELECT SQL_VARIANT_PROPERTY(9.0, 'BaseType'),\n SQL_VARIANT_PROPERTY(9.0, 'Precision'),\n SQL_VARIANT_PROPERTY(9.0, 'Scale')\n
\nSo the literal 9.0 is treated as numeric(2,1) (Can be seen from the query above)
\n@i is numeric(10,0) (as per Mikael's answer)
\nThe rules that govern why numeric(10,0)/numeric(2,1) gives numeric (17,6) are covered here
\nOperation: e1 / e2\nResult precision: p1 - s1 + s2 + max(6, s1 + p2 + 1)\nResult scale: max(6, s1 + p2 + 1)\n
\nSubstituting the relevant values in gives
\n10 - 0 + 1 + max(6, 0 + 2 + 1) = 17\nmax(6, 0 + 2 + 1) = 6 \n
\n
soup wrap:
The result of the expression is numeric (17,6). To see this
DECLARE @i INT, @v SQL_VARIANT
SET @i = 3
SET @v = @i / 9.0
SELECT
CAST(SQL_VARIANT_PROPERTY(@v, 'BaseType') AS VARCHAR(30)) AS BaseType,
CAST(SQL_VARIANT_PROPERTY(@v, 'Precision') AS INT) AS Precision,
CAST(SQL_VARIANT_PROPERTY(@v, 'Scale') AS INT) AS Scale
Returns
BaseType Precision Scale
---------- ----------- -----------
numeric 17 6
Edit:
SELECT SQL_VARIANT_PROPERTY(9.0, 'BaseType'),
SQL_VARIANT_PROPERTY(9.0, 'Precision'),
SQL_VARIANT_PROPERTY(9.0, 'Scale')
So the literal 9.0 is treated as numeric(2,1) (Can be seen from the query above)
@i is numeric(10,0) (as per Mikael's answer)
The rules that govern why numeric(10,0)/numeric(2,1) gives numeric (17,6) are covered here
Operation: e1 / e2
Result precision: p1 - s1 + s2 + max(6, s1 + p2 + 1)
Result scale: max(6, s1 + p2 + 1)
Substituting the relevant values in gives
10 - 0 + 1 + max(6, 0 + 2 + 1) = 17
max(6, 0 + 2 + 1) = 6
qid & accept id:
(5719384, 5732491)
query:
Insert line into a query result (sum)
soup:
Thanks for everyone's feedback/help, it at least got me thinking of different approaches. I came up with something that doesn't depend on what version of SQL Server I'm using (our vendor changes versions often so I have to be as cross-compliant as possible).
\nThis might be considered a hack (ok, it is a hack) but it works, and it gets the job done:
\nSELECT company\n , product\n , price\nFROM companyMaster\n\nUNION\n\nSELECT company + 'Total'\n , ''\n , SUM(price)\nFROM companyMaster\nGROUP BY company\n\nORDER BY company;\n
\nThis solution basically uses the UNION of two select statements. The first is exactly like the orginal, the second produces the sum line I needed. In order to correctly locate the sum line, I did a string concatenation on the company name (appending the word 'Total'), so that when I sort alphabetically on company name, the Total row will show up at the bottom of each company section.
\nHere's what the final report looks like (not exactly what I wanted but functionally equivalent, just not very pretty to look at:
\nCompanyA Product 7 14.99 \nCompanyA Product 3 45.95\nCompanyA Product 4 12.00\nCompanyA Total 72.94\nCompanyB Product 3 45.95\nCompanyB Total 45.95\nCompanyC Product 7 14.99\nCompanyC Product 3 45.95\nCompanyC Total 60.94\n
\n
soup wrap:
Thanks for everyone's feedback/help, it at least got me thinking of different approaches. I came up with something that doesn't depend on what version of SQL Server I'm using (our vendor changes versions often so I have to be as cross-compliant as possible).
This might be considered a hack (ok, it is a hack) but it works, and it gets the job done:
SELECT company
, product
, price
FROM companyMaster
UNION
SELECT company + 'Total'
, ''
, SUM(price)
FROM companyMaster
GROUP BY company
ORDER BY company;
This solution basically uses the UNION of two select statements. The first is exactly like the original, the second produces the sum line I needed. In order to correctly locate the sum line, I did a string concatenation on the company name (appending the word 'Total'), so that when I sort alphabetically on company name, the Total row will show up at the bottom of each company section.
Here's what the final report looks like (not exactly what I wanted but functionally equivalent, just not very pretty to look at):
CompanyA Product 7 14.99
CompanyA Product 3 45.95
CompanyA Product 4 12.00
CompanyA Total 72.94
CompanyB Product 3 45.95
CompanyB Total 45.95
CompanyC Product 7 14.99
CompanyC Product 3 45.95
CompanyC Total 60.94
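For what it's worth, on SQL Server versions that support it, ROLLUP produces the same per-company subtotals (plus a grand-total row) without the UNION trick. This is a sketch only, since the answer deliberately avoids version-specific features, and it assumes one row per (company, product):

```sql
SELECT company, product, SUM(price) AS price
FROM companyMaster
GROUP BY company, product WITH ROLLUP;
-- subtotal rows have product = NULL; the final row is the grand total
```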
qid & accept id:
(5760020, 5760379)
query:
ORACLE/SQL - Joining 3 tables that aren't all interconnected
soup:
I know this is a matter of style but in my opinion ansi style joins make this much clearer:
\nSELECT c.*\nFROM c\nJOIN a ON a.model = c.model\nJOIN b on b.type = a.type\n
\nIn case you have multiple matching elements in a or b, this query will return duplicates. You can either add a DISTINCT or rewrite it as an EXISTS query:
\nSELECT *\nFROM c\nWHERE EXISTS (SELECT 1\n FROM a\n JOIN b ON b.type = a.type\n WHERE a.model = c.model)\n
\nI think this should also give the same result, as long as there are no NULL values in model:
\nSELECT *\nFROM c\nWHERE c.model IN (SELECT a.model\n FROM a\n JOIN b ON b.type = a.type)\n
\n
soup wrap:
I know this is a matter of style, but in my opinion ANSI-style joins make this much clearer:
SELECT c.*
FROM c
JOIN a ON a.model = c.model
JOIN b on b.type = a.type
In case you have multiple matching elements in a or b, this query will return duplicates. You can either add a DISTINCT or rewrite it as an EXISTS query:
SELECT *
FROM c
WHERE EXISTS (SELECT 1
FROM a
JOIN b ON b.type = a.type
WHERE a.model = c.model)
I think this should also give the same result, as long as there are no NULL values in model:
SELECT *
FROM c
WHERE c.model IN (SELECT a.model
FROM a
JOIN b ON b.type = a.type)
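To see the duplicate-row behaviour described above, here is a minimal SQLite sketch (tables a, b, c and their data are invented) that runs the JOIN, EXISTS, and IN variants side by side:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE a (model TEXT, type TEXT);
CREATE TABLE b (type TEXT);
CREATE TABLE c (model TEXT);
INSERT INTO a VALUES ('m1', 't1'), ('m1', 't2'), ('m2', 't9');
INSERT INTO b VALUES ('t1'), ('t2');
INSERT INTO c VALUES ('m1'), ('m2'), ('m3');
""")
# Plain joins: 'm1' matches two (a, b) pairs, so it comes back twice
joined = cur.execute("""
SELECT c.* FROM c
JOIN a ON a.model = c.model
JOIN b ON b.type = a.type
""").fetchall()
# EXISTS collapses the duplicates
exists = cur.execute("""
SELECT * FROM c
WHERE EXISTS (SELECT 1 FROM a JOIN b ON b.type = a.type
              WHERE a.model = c.model)
""").fetchall()
# IN gives the same result as EXISTS here (no NULL models)
in_rows = cur.execute("""
SELECT * FROM c
WHERE c.model IN (SELECT a.model FROM a JOIN b ON b.type = a.type)
""").fetchall()
print(joined, exists, in_rows)
```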
qid & accept id:
(5795541, 5795767)
query:
sql query: no payments in last 90 days
soup:
Ensure there's an index on payments(client_id), or even better, payments(client_id, created_at).
\nFor alternative way to write your query, you could try a not exists, like:
\nselect *\nfrom clients c\nwhere not exists\n (\n select *\n from payments p\n where p.client_id = c.id\n and p.created_at > utc_timestamp() - interval 90 day\n )\n
\nOr an exclusive left join:
\nselect *\nfrom clients c\nleft join\n payments p\non p.client_id = c.id\n and p.created_at > utc_timestamp() - interval 90 day\nwhere p.client_id is null\n
\nIf both are slow, add the explain extended output to your question, so we can see why.
\n
soup wrap:
Ensure there's an index on payments(client_id), or even better, payments(client_id, created_at).
For an alternative way to write your query, you could try a NOT EXISTS, like:
select *
from clients c
where not exists
(
select *
from payments p
where p.client_id = c.id
and p.created_at > utc_timestamp() - interval 90 day
)
Or an exclusive left join:
select *
from clients c
left join
payments p
on p.client_id = c.id
and p.created_at > utc_timestamp() - interval 90 day
where p.client_id is null
If both are slow, add the explain extended output to your question, so we can see why.
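A minimal, testable sketch of both forms, using SQLite as a stand-in for MySQL (so utc_timestamp() - interval 90 day becomes datetime('now', '-90 days'); the sample data is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE clients (id INTEGER);
CREATE TABLE payments (client_id INTEGER, created_at TEXT);
INSERT INTO clients VALUES (1), (2), (3);
INSERT INTO payments VALUES
  (1, datetime('now', '-10 days')),   -- recent payment: client 1 drops out
  (2, datetime('now', '-200 days'));  -- old payment: client 2 still qualifies
""")
not_exists = cur.execute("""
SELECT id FROM clients c
WHERE NOT EXISTS (SELECT 1 FROM payments p
                  WHERE p.client_id = c.id
                    AND p.created_at > datetime('now', '-90 days'))
""").fetchall()
left_join = cur.execute("""
SELECT c.id FROM clients c
LEFT JOIN payments p
       ON p.client_id = c.id
      AND p.created_at > datetime('now', '-90 days')
WHERE p.client_id IS NULL
""").fetchall()
print(not_exists, left_join)
```

Both queries return clients 2 and 3: the ones with no payment in the last 90 days, including clients with no payments at all.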
qid & accept id:
(5803133, 5803148)
query:
How to use SELECT INTO with static values included?
soup:
SELECT foo.id, 'R' AS type INTO bar FROM foo;\n
\nIn MySQL this would normally be done with:
\nLazy with no indexes
\nCREATE TABLE bar SELECT id, 'R' AS type FROM foo;\n
\nNicer way (assuming you've created table bar already)
\nINSERT INTO bar SELECT id, 'R' AS type FROM foo;\n
\n
soup wrap:
SELECT foo.id, 'R' AS type INTO bar FROM foo;
In MySQL this would normally be done with:
Lazy with no indexes
CREATE TABLE bar SELECT id, 'R' AS type FROM foo;
Nicer way (assuming you've created table bar already)
INSERT INTO bar SELECT id, 'R' AS type FROM foo;
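A quick sketch of both forms against SQLite, whose CREATE TABLE ... AS SELECT plays the role of MySQL's CREATE TABLE ... SELECT (table names follow the answer; the data is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE foo (id INTEGER);
INSERT INTO foo VALUES (1), (2);
-- "lazy" form: create bar from the SELECT, static 'R' included
CREATE TABLE bar AS SELECT id, 'R' AS type FROM foo;
-- "nicer" form: bar already exists, append the same rows
INSERT INTO bar SELECT id, 'R' FROM foo;
""")
rows = cur.execute("SELECT * FROM bar ORDER BY id").fetchall()
print(rows)
```

Every row carries the static 'R' value alongside the selected id.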
qid & accept id:
(5816567, 5816696)
query:
select n rows in sql
soup:
SELECT *\nFROM (\n SELECT country, capitol, rownum as rn\n FROM (SELECT country, capitol FROM your_table ORDER BY country)\n) \nWHERE rn > 1\n
\nIf the "first one" is not defined through sorting by country, then you need to apply a different ORDER BY in the inner query.
\nEdit
\nFor completeness, the ANSI SQL solution to this would be:
\nSELECT *\nFROM (\n SELECT country, \n capitol, \n row_number() over (order by country) as rn\n FROM your_table\n) \nWHERE rn > 1\n
\nThat is a portable solution that works on almost all major DBMS
\n
soup wrap:
SELECT *
FROM (
SELECT country, capitol, rownum as rn -- rownum is assigned before ORDER BY, so sort in an inner view first
FROM (SELECT country, capitol FROM your_table ORDER BY country)
)
WHERE rn > 1
If the "first one" is not defined through sorting by country, then you need to apply a different ORDER BY in the inner query.
Edit
For completeness, the ANSI SQL solution to this would be:
SELECT *
FROM (
SELECT country,
capitol,
row_number() over (order by country) as rn
FROM your_table
)
WHERE rn > 1
That is a portable solution that works on almost all major DBMSs.
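Here is the row_number() version run against SQLite (window functions need SQLite 3.25+, which ships with any recent Python; the sample rows are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE your_table (country TEXT, capitol TEXT);
INSERT INTO your_table VALUES
  ('Austria', 'Vienna'), ('Belgium', 'Brussels'), ('Chile', 'Santiago');
""")
rows = cur.execute("""
SELECT country, capitol FROM (
  SELECT country, capitol,
         row_number() OVER (ORDER BY country) AS rn
  FROM your_table
) WHERE rn > 1
ORDER BY country
""").fetchall()
print(rows)
```

The first country in sort order (Austria) is skipped; the rest come back.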
qid & accept id:
(5943678, 6862842)
query:
MySQL - How to pivot NVP?
soup:
thats a pretty standard implementation
\nSELECT\nproduct_id,\nGROUP_CONCAT(if(name = 'Author', value, NULL)) AS 'Author',\nGROUP_CONCAT(if(name = 'Publisher', value, NULL)) AS 'Publisher'\nFROM product_attribute\nGROUP BY product_id; \n
\nyou have to
\nselect distinct(name) from product_attribute\n
\nso you can build the above query \nbut NO it will not work with identical names , GROUP_CONCAT will concat the values .
\ni ve seen an implementation which adds a column and populates it with increment values so that it can then pivot the table using variables and a counter. but i dont have that in mysql
\n
soup wrap:
That's a pretty standard implementation:
SELECT
product_id,
GROUP_CONCAT(if(name = 'Author', value, NULL)) AS 'Author',
GROUP_CONCAT(if(name = 'Publisher', value, NULL)) AS 'Publisher'
FROM product_attribute
GROUP BY product_id;
You have to run
select distinct(name) from product_attribute
so you can build the above query.
But no, it will not work with identical names: GROUP_CONCAT will concatenate the values.
I've seen an implementation which adds a column and populates it with incrementing values, so that it can then pivot the table using variables and a counter. But I don't have that in MySQL.
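A runnable sketch of the pivot, with MySQL's IF(...) swapped for a portable CASE so it runs on SQLite (sample data invented). It also shows the identical-name caveat: the two 'Author' rows for product 2 get concatenated:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE product_attribute (product_id INTEGER, name TEXT, value TEXT);
INSERT INTO product_attribute VALUES
  (1, 'Author', 'Knuth'),
  (1, 'Publisher', 'Addison-Wesley'),
  (2, 'Author', 'Aho'),
  (2, 'Author', 'Ullman');
""")
rows = cur.execute("""
SELECT product_id,
       group_concat(CASE WHEN name = 'Author' THEN value END)    AS Author,
       group_concat(CASE WHEN name = 'Publisher' THEN value END) AS Publisher
FROM product_attribute
GROUP BY product_id
ORDER BY product_id
""").fetchall()
print(rows)
```

Product 1 pivots cleanly; product 2 ends up with both author values joined in one cell, and NULL for its missing publisher.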
qid & accept id:
(5992308, 6072328)
query:
How to create a function in DB2 that returns the value of a sequence?
soup:
CREATE FUNCTION "MYSCHEMA"."MY_FUNC"(PARAM1 VARCHAR(4000))\n RETURNS INT\nSPECIFIC SQL110520140321900 BEGIN ATOMIC\n DECLARE VAR1 INT;\n DECLARE VAR2 INT;\n SET VAR1 = NEXTVAL FOR MY_SEQ;\n SET VAR2 = VAR1 + 2000; --or whatever magic you want to do\n RETURN VAR2;\nEND\n
\nTo try it out:
\nSELECT MY_FUNC('aa') FROM SYSIBM.SYSDUMMY1;\n
\n
soup wrap:
CREATE FUNCTION "MYSCHEMA"."MY_FUNC"(PARAM1 VARCHAR(4000))
RETURNS INT
SPECIFIC SQL110520140321900 BEGIN ATOMIC
DECLARE VAR1 INT;
DECLARE VAR2 INT;
SET VAR1 = NEXTVAL FOR MY_SEQ;
SET VAR2 = VAR1 + 2000; --or whatever magic you want to do
RETURN VAR2;
END
To try it out:
SELECT MY_FUNC('aa') FROM SYSIBM.SYSDUMMY1;
qid & accept id:
(6031181, 6032080)
query:
Find conflicted date intervals using SQL
soup:
declare @T table (ItemId int, IntervalID int, StartDate datetime, EndDate datetime)\n\ninsert into @T\nselect 1, 1, NULL, '2011-01-15' union all\nselect 2, 1, '2011-01-16', '2011-01-25' union all\nselect 3, 1, '2011-01-26', NULL union all\nselect 4, 2, NULL, '2011-01-17' union all\nselect 5, 2, '2011-01-16', '2011-01-25' union all\nselect 6, 2, '2011-01-26', NULL\n\nselect T1.*\nfrom @T as T1\n inner join @T as T2\n on coalesce(T1.StartDate, '1753-01-01') < coalesce(T2.EndDate, '9999-12-31') and\n coalesce(T1.EndDate, '9999-12-31') > coalesce(T2.StartDate, '1753-01-01') and\n T1.IntervalID = T2.IntervalID and\n T1.ItemId <> T2.ItemId\n
\nResult:
\nItemId IntervalID StartDate EndDate\n----------- ----------- ----------------------- -----------------------\n5 2 2011-01-16 00:00:00.000 2011-01-25 00:00:00.000\n4 2 NULL 2011-01-17 00:00:00.000\n
\n
soup wrap:
declare @T table (ItemId int, IntervalID int, StartDate datetime, EndDate datetime)
insert into @T
select 1, 1, NULL, '2011-01-15' union all
select 2, 1, '2011-01-16', '2011-01-25' union all
select 3, 1, '2011-01-26', NULL union all
select 4, 2, NULL, '2011-01-17' union all
select 5, 2, '2011-01-16', '2011-01-25' union all
select 6, 2, '2011-01-26', NULL
select T1.*
from @T as T1
inner join @T as T2
on coalesce(T1.StartDate, '1753-01-01') < coalesce(T2.EndDate, '9999-12-31') and
coalesce(T1.EndDate, '9999-12-31') > coalesce(T2.StartDate, '1753-01-01') and
T1.IntervalID = T2.IntervalID and
T1.ItemId <> T2.ItemId
Result:
ItemId IntervalID StartDate EndDate
----------- ----------- ----------------------- -----------------------
5 2 2011-01-16 00:00:00.000 2011-01-25 00:00:00.000
4 2 NULL 2011-01-17 00:00:00.000
qid & accept id:
(6057352, 6057388)
query:
Find duplicates in SQL
soup:
A grouping on SSN should do it
\n
\nSELECT\n ssn\nFROM\n Table t1\nGROUP BY\n ssn\nHAVING COUNT(*) > 1\n
\n..or if you have many rows per ssn and only want to find duplicate names)
\n...\nHAVING COUNT(DISTINCT name) > 1 \n
\n\nEdit, oops, misunderstood
\nSELECT\n ssn\nFROM\n Table t1\nGROUP BY\n ssn\nHAVING MIN(name) <> MAX(name)\n
\n
soup wrap:
A grouping on SSN should do it
SELECT
ssn
FROM
Table t1
GROUP BY
ssn
HAVING COUNT(*) > 1
...or, if you have many rows per ssn and only want to find duplicate names:
...
HAVING COUNT(DISTINCT name) > 1
Edit, oops, misunderstood
SELECT
ssn
FROM
Table t1
GROUP BY
ssn
HAVING MIN(name) <> MAX(name)
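Both HAVING variants, sketched against SQLite with invented data: ssn 111 has duplicate rows with the same name, ssn 222 has two different names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE t1 (ssn TEXT, name TEXT);
INSERT INTO t1 VALUES
  ('111', 'Alice'), ('111', 'Alice'),   -- same ssn, same name
  ('222', 'Bob'),   ('222', 'Bobby'),   -- same ssn, different names
  ('333', 'Carol');
""")
# Any ssn that appears more than once
dup_rows = cur.execute("""
SELECT ssn FROM t1 GROUP BY ssn HAVING COUNT(*) > 1 ORDER BY ssn
""").fetchall()
# Only ssns whose rows disagree on name
dup_names = cur.execute("""
SELECT ssn FROM t1 GROUP BY ssn HAVING MIN(name) <> MAX(name) ORDER BY ssn
""").fetchall()
print(dup_rows, dup_names)
```

The MIN/MAX trick filters out ssn 111, whose duplicates all carry the same name.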
qid & accept id:
(6070894, 6071196)
query:
Detect overlapping ranges and correct then in oracle
soup:
Analytic functions could help:
\nselect userid, map\n, case when prevend >= startday then prevend+1 else startday end newstart\n, endday\nfrom\n( select userid, map, startday, endday\n , lag(endday) over (partition by userid order by startday) prevend\n from mytable\n)\norder by userid, startday\n
\nGives:
\nUSERID MAP NEWSTART ENDDAY\n1 A 01/01/2011 01/05/2011\n1 B 01/06/2011 01/10/2011\n2 A 01/01/2011 01/07/2011\n2 B 01/08/2011 01/10/2011\n
\n
soup wrap:
Analytic functions could help:
select userid, map
, case when prevend >= startday then prevend+1 else startday end newstart
, endday
from
( select userid, map, startday, endday
, lag(endday) over (partition by userid order by startday) prevend
from mytable
)
order by userid, startday
Gives:
USERID MAP NEWSTART ENDDAY
1 A 01/01/2011 01/05/2011
1 B 01/06/2011 01/10/2011
2 A 01/01/2011 01/07/2011
2 B 01/08/2011 01/10/2011
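A sketch of the lag() query against SQLite (window functions need 3.25+); integer day numbers stand in for the dates so that prevend + 1 stays portable, and the sample rows are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE mytable (userid INTEGER, map TEXT, startday INTEGER, endday INTEGER);
INSERT INTO mytable VALUES
  (1, 'A', 1, 5),
  (1, 'B', 4, 10),   -- starts before the previous range ends
  (2, 'A', 1, 7);
""")
rows = cur.execute("""
SELECT userid, map,
       CASE WHEN prevend >= startday THEN prevend + 1 ELSE startday END AS newstart,
       endday
FROM (SELECT userid, map, startday, endday,
             lag(endday) OVER (PARTITION BY userid ORDER BY startday) AS prevend
      FROM mytable)
ORDER BY userid, startday
""").fetchall()
print(rows)
```

The overlapping range for user 1 gets its start bumped from 4 to 6 (one past the previous end); non-overlapping rows pass through untouched, since lag() returns NULL for each partition's first row and the CASE falls through to startday.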
qid & accept id:
(6093085, 6098034)
query:
Mapping values without a table
soup:
Use a Common Table Expression (CTE) within your function will make it easy to replace the CTE with a base table later e.g.
\nWITH YearCodes (year_code, year) AS\n ( SELECT year_code, year\n FROM ( VALUES ( 'Y', 2000 ), \n ( '1', 2001 ), \n ( '2', 2002 ) ) \n AS YearCodes ( year_code, year ) )\nSELECT ...;\n
\nAlternatively, a derived table:
\nSELECT *\n FROM ( VALUES ( 'Y', 2000 ), \n ( '1', 2001 ), \n ( '2', 2002 ) ) \n AS YearCodes ( year_code, year )\n -- other stuff here;\n
\nPerhaps that later base table could be a calendar table.
\n
soup wrap:
Using a Common Table Expression (CTE) within your function will make it easy to replace the CTE with a base table later, e.g.
WITH YearCodes (year_code, year) AS
( SELECT year_code, year
FROM ( VALUES ( 'Y', 2000 ),
( '1', 2001 ),
( '2', 2002 ) )
AS YearCodes ( year_code, year ) )
SELECT ...;
Alternatively, a derived table:
SELECT *
FROM ( VALUES ( 'Y', 2000 ),
( '1', 2001 ),
( '2', 2002 ) )
AS YearCodes ( year_code, year )
-- other stuff here;
Perhaps that later base table could be a calendar table.
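The CTE-over-VALUES form works essentially unchanged in SQLite; a minimal runnable sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# The year-code mapping lives entirely in the query, no table needed
rows = cur.execute("""
WITH YearCodes (year_code, year) AS (
  VALUES ('Y', 2000), ('1', 2001), ('2', 2002)
)
SELECT year FROM YearCodes WHERE year_code = '1'
""").fetchall()
print(rows)
```

Swapping the inline VALUES for a real base table later only means changing the CTE body, not the queries that use it.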
qid & accept id:
(6094039, 6094075)
query:
Oracle: Updating a table column using ROWNUM in conjunction with ORDER BY clause
soup:
This should work (works for me)
\nupdate table_a outer \nset sequence_column = (\n select rnum from (\n\n -- evaluate row_number() for all rows ordered by your columns\n -- BEFORE updating those values into table_a\n select id, row_number() over (order by column1, column2) rnum \n from table_a) inner \n\n -- join on the primary key to be sure you'll only get one value\n -- for rnum\n where inner.id = outer.id);\n
\nOR you use the MERGE statement. Something like this.
\nmerge into table_a u\nusing (\n select id, row_number() over (order by column1, column2) rnum \n from table_a\n) s\non (u.id = s.id)\nwhen matched then update set u.sequence_column = s.rnum\n
\n
soup wrap:
This should work (works for me)
update table_a outer
set sequence_column = (
select rnum from (
-- evaluate row_number() for all rows ordered by your columns
-- BEFORE updating those values into table_a
select id, row_number() over (order by column1, column2) rnum
from table_a) inner
-- join on the primary key to be sure you'll only get one value
-- for rnum
where inner.id = outer.id);
OR you use the MERGE statement. Something like this.
merge into table_a u
using (
select id, row_number() over (order by column1, column2) rnum
from table_a
) s
on (u.id = s.id)
when matched then update set u.sequence_column = s.rnum
qid & accept id:
(6121779, 6123939)
query:
MYSQL subset operation
soup:
From your pseudo code I guess that you want to check if a (dynamic) list of values is a subset of another list provided by a SELECT. If yes, then a whole table will be shown. If not, no rows will be shown.
\nHere's how to achieve that:
\nSELECT *\nFROM tb_values\nWHERE \n ( SELECT COUNT(DISTINCT value)\n FROM tb_value\n WHERE isgoodvalue = true\n AND value IN (value1, value2, value3)\n ) = 3\n
\n
\nUPDATED after OP's explanation:
\nSELECT *\nFROM project\n JOIN \n ( SELECT projectid\n FROM projectTagMap\n WHERE isgoodvalue = true\n AND tag IN (tag1, tag2, tag3)\n GROUP BY projectid\n HAVING COUNT(*) = 3\n ) AS ok\n ON ok.projectid = project.id\n
\n
soup wrap:
From your pseudo code I guess that you want to check if a (dynamic) list of values is a subset of another list provided by a SELECT. If yes, then a whole table will be shown. If not, no rows will be shown.
Here's how to achieve that:
SELECT *
FROM tb_values
WHERE
( SELECT COUNT(DISTINCT value)
FROM tb_value
WHERE isgoodvalue = true
AND value IN (value1, value2, value3)
) = 3
UPDATED after OP's explanation:
SELECT *
FROM project
JOIN
( SELECT projectid
FROM projectTagMap
WHERE isgoodvalue = true
AND tag IN (tag1, tag2, tag3)
GROUP BY projectid
HAVING COUNT(*) = 3
) AS ok
ON ok.projectid = project.id
qid & accept id:
(6127338, 6127471)
query:
SQL/mysql - Select distinct/UNIQUE but return all columns?
soup:
You're looking for a group by:
\nselect *\nfrom table\ngroup by field1\n
\nWhich can occasionally be written with a distinct on statement:
\nselect distinct on field1 *\nfrom table\n
\nOn most platforms, however, neither of the above will work because the behavior on the other columns is unspecified. (The first works in MySQL, if that's what you're using.)
\nYou could fetch the distinct fields and stick to picking a single arbitrary row each time.
\nOn some platforms (e.g. PostgreSQL, Oracle, T-SQL) this can be done directly using window functions:
\nselect *\nfrom (\n select *,\n row_number() over (partition by field1 order by field2) as row_number\n from table\n ) as rows\nwhere row_number = 1\n
\nOn others (MySQL, SQLite), you'll need to write subqueries that will make you join the entire table with itself (example), so not recommended.
\n
soup wrap:
You're looking for a group by:
select *
from table
group by field1
Which can occasionally be written with a distinct on statement:
select distinct on (field1) *
from table
On most platforms, however, neither of the above will work because the behavior on the other columns is unspecified. (The first works in MySQL, if that's what you're using.)
You could fetch the distinct fields and stick to picking a single arbitrary row each time.
On some platforms (e.g. PostgreSQL, Oracle, T-SQL) this can be done directly using window functions:
select *
from (
select *,
row_number() over (partition by field1 order by field2) as row_number
from table
) as rows
where row_number = 1
On others (MySQL, SQLite), you'll need to write subqueries that make you join the entire table with itself (example), so this is not recommended.
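The window-function approach, sketched in SQLite (3.25+ supports window functions, so the "On others" caveat above no longer applies to recent SQLite; table and data invented). The 'first' row wins within field1 = 'a' because it has the smallest field2:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE t (field1 TEXT, field2 INTEGER, payload TEXT);
INSERT INTO t VALUES
  ('a', 2, 'second'), ('a', 1, 'first'), ('b', 5, 'only');
""")
rows = cur.execute("""
SELECT field1, payload FROM (
  SELECT *, row_number() OVER (PARTITION BY field1 ORDER BY field2) AS rn
  FROM t
) WHERE rn = 1
ORDER BY field1
""").fetchall()
print(rows)
```

One row per distinct field1, and the ORDER BY inside the window makes the choice deterministic instead of arbitrary.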
qid & accept id:
(6159814, 6159840)
query:
RSS to Database - How to Insert String with Any Character?
soup:
Using mysql_real_escape_string with the magic quotes enabled will escape your data twice.
\n\nNote: If magic_quotes_gpc is enabled,\n first apply stripslashes() to the\n data. Using this function\n [mysql_real_escape_string] on data\n which has already been escaped will\n escape the data twice.
\n
\nWhile outputting those content you can use stripslashes function.
\necho stripslashes($data['description']);\n
\nEDIT
\ndesc is mysql reserved word and you must enclose desc in backticks ``
\n$query = "INSERT INTO FEED_CONTENT (title, link, `desc`)\n VALUES (\n '".mysql_real_escape_string($title)."',\n '".$href."',\n '".mysql_real_escape_string($desc)."'\n )";\n
\n
soup wrap:
Using mysql_real_escape_string with the magic quotes enabled will escape your data twice.
Note: If magic_quotes_gpc is enabled,
first apply stripslashes() to the
data. Using this function
[mysql_real_escape_string] on data
which has already been escaped will
escape the data twice.
While outputting that content, you can use the stripslashes function.
echo stripslashes($data['description']);
EDIT
desc is a MySQL reserved word, so you must enclose it in backticks (`).
$query = "INSERT INTO FEED_CONTENT (title, link, `desc`)
VALUES (
'".mysql_real_escape_string($title)."',
'".$href."',
'".mysql_real_escape_string($desc)."'
)";
qid & accept id:
(6174355, 6174409)
query:
How to copy tables avoiding cursors in SQL?
soup:
You can use the output clause with the merge statement to get a mapping between source id and target id.\nDescribed in this question. Using merge..output to get mapping between source.id and target.id
\nHere is some code that you can test. I use table variables instead of real tables.
\nSetup sample data:
\n-- @A and @B is the source tables\ndeclare @A as table\n(\n id int,\n FK_A_B int,\n name varchar(10)\n)\n\ndeclare @B as table\n(\n id int,\n visible bit\n) \n\n-- Sample data in @A and @B\ninsert into @B values (21, 1),(32, 0)\ninsert into @A values (1, 21, 'n1'),(5, 32, 'n2')\n\n\n-- @C and @D is the target tables with id as identity columns\ndeclare @C as table\n(\n id int identity,\n FK_C_D int not null,\n name varchar(10)\n)\n\ndeclare @D as table\n(\n id int identity,\n visible bit\n) \n\n-- Sample data already in @C and @D\ninsert into @D values (1),(0)\ninsert into @C values (1, 'x1'),(1, 'x2'),(2, 'x3')\n
\nCopy data:
\n-- The @IdMap is a table that holds the mapping between\n-- the @B.id and @D.id (@D.id is an identity column)\ndeclare @IdMap table(TargetID int, SourceID int)\n\n-- Merge from @B to @D.\nmerge @D as D -- Target table\nusing @B as B -- Source table\non 0=1 -- 0=1 means that there are no matches for merge\nwhen not matched then\n insert (visible) values(visible) -- Insert to @D\noutput inserted.id, B.id into @IdMap; -- Capture the newly created inserted.id and\n -- map that to the source (@B.id)\n\n-- Add rows to @C from @A with a join to\n-- @IdMap to get the new id for the FK relation\ninsert into @C(FK_C_D, name)\nselect I.TargetID, A.name \nfrom @A as A\n inner join @IdMap as I\n on A.FK_A_B = I.SourceID\n
\nResult:
\nselect *\nfrom @D as D\n inner join @C as C\n on D.id = C.FK_C_D\n\nid visible id FK_C_D name\n----------- ------- ----------- ----------- ----------\n1 1 1 1 x1\n1 1 2 1 x2\n2 0 3 2 x3\n3 1 4 3 n1\n4 0 5 4 n2\n
\nYou can test the code here: http://data.stackexchange.com/stackoverflow/q/101643/using-merge-to-map-source-id-to-target-id
\n
soup wrap:
You can use the OUTPUT clause with the MERGE statement to get a mapping between source id and target id.
This is described in this question: Using merge..output to get mapping between source.id and target.id
Here is some code that you can test. I use table variables instead of real tables.
Setup sample data:
-- @A and @B are the source tables
declare @A as table
(
id int,
FK_A_B int,
name varchar(10)
)
declare @B as table
(
id int,
visible bit
)
-- Sample data in @A and @B
insert into @B values (21, 1),(32, 0)
insert into @A values (1, 21, 'n1'),(5, 32, 'n2')
-- @C and @D are the target tables with id as identity columns
declare @C as table
(
id int identity,
FK_C_D int not null,
name varchar(10)
)
declare @D as table
(
id int identity,
visible bit
)
-- Sample data already in @C and @D
insert into @D values (1),(0)
insert into @C values (1, 'x1'),(1, 'x2'),(2, 'x3')
Copy data:
-- The @IdMap is a table that holds the mapping between
-- the @B.id and @D.id (@D.id is an identity column)
declare @IdMap table(TargetID int, SourceID int)
-- Merge from @B to @D.
merge @D as D -- Target table
using @B as B -- Source table
on 0=1 -- 0=1 means that there are no matches for merge
when not matched then
insert (visible) values(visible) -- Insert to @D
output inserted.id, B.id into @IdMap; -- Capture the newly created inserted.id and
-- map that to the source (@B.id)
-- Add rows to @C from @A with a join to
-- @IdMap to get the new id for the FK relation
insert into @C(FK_C_D, name)
select I.TargetID, A.name
from @A as A
inner join @IdMap as I
on A.FK_A_B = I.SourceID
Result:
select *
from @D as D
inner join @C as C
on D.id = C.FK_C_D
id visible id FK_C_D name
----------- ------- ----------- ----------- ----------
1 1 1 1 x1
1 1 2 1 x2
2 0 3 2 x3
3 1 4 3 n1
4 0 5 4 n2
You can test the code here: http://data.stackexchange.com/stackoverflow/q/101643/using-merge-to-map-source-id-to-target-id
qid & accept id:
(6226690, 6227078)
query:
Creating a variable on database to hold global stats
soup:
You could use an indexed view, that SQL Server will automatically maintain:
\ncreate table dbo.users (\n ID int not null,\n Activated bit not null\n)\ngo\ncreate view dbo.user_status_stats (Activated,user_count)\nwith schemabinding\nas\n select Activated,COUNT_BIG(*) from dbo.users group by Activated\ngo\ncreate unique clustered index IX_user_status_stats on dbo.user_status_stats (Activated)\ngo\n
\nThis just has two possible statuses, but could expand to more using a different data type. As I say, in this case, SQL Server will maintain the counts behind the scenes, so you can just query the view:
\nSELECT user_count from user_status_stats with (NOEXPAND) where Activated = 1\n
\nand it won't have to query the underlying table. You need to use the WITH (NOEXPAND) hint on editions below (Enterprise/Developer).
\n
\nAlthough as @Jim suggested, doing a COUNT(*) against an index when the index column(s) can satisfy the query criteria using equality comparisons should be pretty quick also.
\n
soup wrap:
You could use an indexed view, which SQL Server will automatically maintain:
create table dbo.users (
ID int not null,
Activated bit not null
)
go
create view dbo.user_status_stats (Activated,user_count)
with schemabinding
as
select Activated,COUNT_BIG(*) from dbo.users group by Activated
go
create unique clustered index IX_user_status_stats on dbo.user_status_stats (Activated)
go
This just has two possible statuses, but it could expand to more using a different data type. As I say, SQL Server will maintain the counts behind the scenes, so you can just query the view:
SELECT user_count from user_status_stats with (NOEXPAND) where Activated = 1
and it won't have to query the underlying table. You need to use the WITH (NOEXPAND) hint on editions below Enterprise/Developer.
Although as @Jim suggested, doing a COUNT(*) against an index when the index column(s) can satisfy the query criteria using equality comparisons should be pretty quick also.
qid & accept id:
(6227934, 6229720)
query:
Create a view/temporary table from a column with CSV
soup:
I don't think this is an exact duplicate of the question referenced in the close votes. Similar yes, but not the same.
\nNot exactly beautiful, but:
\nCREATE OR REPLACE VIEW your_view AS\nSELECT tt.ID, SUBSTR(value, sp, ep-sp) split, other_col1, other_col2...\n FROM (SELECT id, value\n , INSTR(','||value, ',', 1, L) sp -- 1st posn of substr at this level\n , INSTR(value||',', ',', 1, L) ep -- posn of delimiter at this level\n FROM tt JOIN (SELECT LEVEL L FROM dual CONNECT BY LEVEL < 20) q -- 20 is max #substrings\n ON LENGTH(value)-LENGTH(REPLACE(value,','))+1 >= L \n) qq JOIN tt on qq.id = tt.id;\n
\nwhere tt is your table.
\nWorks for csv values longer than 1 or null. The CONNECT BY LEVEL < 20 is arbitrary, adjust for your situation.
\nTo illustrate:
\n SQL> CREATE TABLE tt (ID INTEGER, c VARCHAR2(20), othercol VARCHAR2(20));\n\n Table created\n SQL> INSERT INTO tt VALUES (1, 'a,b,c', 'val1');\n\n 1 row inserted\n SQL> INSERT INTO tt VALUES (2, 'd,e,f,g', 'val2');\n\n 1 row inserted\n SQL> INSERT INTO tt VALUES (3, 'a,f', 'val3');\n\n 1 row inserted\n SQL> INSERT INTO tt VALUES (4,'aa,bbb,cccc', 'val4');\n\n 1 row inserted\n SQL> CREATE OR REPLACE VIEW myview AS\n 2 SELECT tt.ID, SUBSTR(c, sp, ep-sp+1) splitval, othercol\n 3 FROM (SELECT ID\n 4 , INSTR(','||c,',',1,L) sp, INSTR(c||',',',',1,L)-1 ep\n 5 FROM tt JOIN (SELECT LEVEL L FROM dual CONNECT BY LEVEL < 20) q\n 6 ON LENGTH(c)-LENGTH(REPLACE(c,','))+1 >= L\n 7 ) q JOIN tt ON q.id =tt.id;\n\n View created\n SQL> select * from myview order by 1,2;\n\n ID SPLITVAL OTHERCOL\n--------------------------------------- -------------------- --------------------\n 1 a val1\n 1 b val1\n 1 c val1\n 2 d val2\n 2 e val2\n 2 f val2\n 2 g val2\n 3 a val3\n 3 f val3\n 4 aa val4\n 4 bbb val4\n 4 cccc val4\n\n12 rows selected\n\nSQL> \n
\n
soup wrap:
I don't think this is an exact duplicate of the question referenced in the close votes. Similar yes, but not the same.
Not exactly beautiful, but:
CREATE OR REPLACE VIEW your_view AS
SELECT tt.ID, SUBSTR(value, sp, ep-sp) split, other_col1, other_col2...
FROM (SELECT id, value
, INSTR(','||value, ',', 1, L) sp -- 1st posn of substr at this level
, INSTR(value||',', ',', 1, L) ep -- posn of delimiter at this level
FROM tt JOIN (SELECT LEVEL L FROM dual CONNECT BY LEVEL < 20) q -- 20 is max #substrings
ON LENGTH(value)-LENGTH(REPLACE(value,','))+1 >= L
) qq JOIN tt on qq.id = tt.id;
where tt is your table.
It works for CSV values longer than one element, or NULL. The CONNECT BY LEVEL < 20 is arbitrary; adjust it for your situation.
To illustrate:
SQL> CREATE TABLE tt (ID INTEGER, c VARCHAR2(20), othercol VARCHAR2(20));
Table created
SQL> INSERT INTO tt VALUES (1, 'a,b,c', 'val1');
1 row inserted
SQL> INSERT INTO tt VALUES (2, 'd,e,f,g', 'val2');
1 row inserted
SQL> INSERT INTO tt VALUES (3, 'a,f', 'val3');
1 row inserted
SQL> INSERT INTO tt VALUES (4,'aa,bbb,cccc', 'val4');
1 row inserted
SQL> CREATE OR REPLACE VIEW myview AS
2 SELECT tt.ID, SUBSTR(c, sp, ep-sp+1) splitval, othercol
3 FROM (SELECT ID
4 , INSTR(','||c,',',1,L) sp, INSTR(c||',',',',1,L)-1 ep
5 FROM tt JOIN (SELECT LEVEL L FROM dual CONNECT BY LEVEL < 20) q
6 ON LENGTH(c)-LENGTH(REPLACE(c,','))+1 >= L
7 ) q JOIN tt ON q.id =tt.id;
View created
SQL> select * from myview order by 1,2;
ID SPLITVAL OTHERCOL
--------------------------------------- -------------------- --------------------
1 a val1
1 b val1
1 c val1
2 d val2
2 e val2
2 f val2
2 g val2
3 a val3
3 f val3
4 aa val4
4 bbb val4
4 cccc val4
12 rows selected
SQL>
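The CONNECT BY trick is Oracle-specific; on engines with recursive CTEs the same split can be expressed differently. Here is a sketch in SQLite using a recursive CTE (a deliberate technique swap, not the answer's method), against a cut-down version of the tt table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE tt (id INTEGER, c TEXT, othercol TEXT);
INSERT INTO tt VALUES (1, 'a,b,c', 'val1'), (3, 'a,f', 'val3');
""")
rows = cur.execute("""
WITH RECURSIVE split(id, splitval, rest, othercol) AS (
  -- seed: empty piece, whole CSV (with trailing comma) still to process
  SELECT id, '', c || ',', othercol FROM tt
  UNION ALL
  -- peel off the next element before the first comma
  SELECT id,
         substr(rest, 1, instr(rest, ',') - 1),
         substr(rest, instr(rest, ',') + 1),
         othercol
  FROM split WHERE rest <> ''
)
SELECT id, splitval, othercol FROM split
WHERE splitval <> ''
ORDER BY id, splitval
""").fetchall()
print(rows)
```

Each CSV value unrolls into one row per element, with the other columns carried along, matching the shape of the Oracle output above.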
qid & accept id:
(6254626, 6255892)
query:
performing a sort of "reverse lookup" in sql server
soup:
Why not get both sets of comments at once?
\nSELECT\n ...\nFROM\n Products P\n LEFT JOIN Comments C\n ON P.ProductID LIKE C.SpecID + '%'\n OR P.ProductID LIKE '%-' + C.SpecID\n
\nAlso you could consider:
\nSELECT\n ...\nFROM\n Products P\n LEFT JOIN Comments C\n ON (Len(C.SpecID) = 2 AND P.ProductID LIKE C.SpecID + '%')\n OR (Len(C.SpecID) > 2 AND P.ProductID LIKE '%-' + C.SpecID)\n
\nTesting is in order to see if one performs better than the other. If you find the queries to be too slow, then trying adding some persisted calculated columns: in Products to specify whether the product ID has a dash in it or not, and in Comments add two columns, one with only product IDs and one with only suffices. Indexes on these columns could help.
\nALTER TABLE Comments ADD ExactSpecID AS \n (CASE WHEN Len(SpecID) > 2 THEN SpecID ELSE NULL END) PERSISTED\nALTER TABLE Comments ADD Suffix AS \n (CASE WHEN Len(SpecID) = 2 THEN SpecID ELSE NULL END) PERSISTED\n
\n
soup wrap:
Why not get both sets of comments at once?
SELECT
...
FROM
Products P
LEFT JOIN Comments C
ON P.ProductID LIKE C.SpecID + '%'
OR P.ProductID LIKE '%-' + C.SpecID
Also you could consider:
SELECT
...
FROM
Products P
LEFT JOIN Comments C
ON (Len(C.SpecID) = 2 AND P.ProductID LIKE C.SpecID + '%')
OR (Len(C.SpecID) > 2 AND P.ProductID LIKE '%-' + C.SpecID)
Testing is in order to see if one performs better than the other. If you find the queries to be too slow, then try adding some persisted computed columns: one in Products to specify whether the product ID has a dash in it or not, and two in Comments, one with only product IDs and one with only suffixes. Indexes on these columns could help.
ALTER TABLE Comments ADD ExactSpecID AS
(CASE WHEN Len(SpecID) > 2 THEN SpecID ELSE NULL END) PERSISTED
ALTER TABLE Comments ADD Suffix AS
(CASE WHEN Len(SpecID) = 2 THEN SpecID ELSE NULL END) PERSISTED
qid & accept id:
(6267954, 6268173)
query:
SQL SELECT complex expression in column - additional boolean
soup:
you can go with left outer join
\nselect \na.article_id, a.article_body, \nua.article_id as been_read --will be not null for read articles\nfrom Articles a \nleft outer join Users_Articles ua \n on (ua.article_id = a.article_id and ua.user_id = $current_user_id)\n
\nor with subselect
\nselect \na.article_id, a.article_body, \n(select 1 from Users_Articles ua \n where ua.article_id = a.article_id \n and ua.user_id = $current_user_id) as been_read --will be not null for read articles\nfrom Articles a\n
\n
soup wrap:
You can go with a left outer join:
select
a.article_id, a.article_body,
ua.article_id as been_read --will be not null for read articles
from Articles a
left outer join Users_Articles ua
on (ua.article_id = a.article_id and ua.user_id = $current_user_id)
Or with a subselect:
select
a.article_id, a.article_body,
(select 1 from Users_Articles ua
where ua.article_id = a.article_id
and ua.user_id = $current_user_id) as been_read --will be not null for read articles
from Articles a
qid & accept id:
(6280565, 6284057)
query:
Unique constraint over multiple tables
soup:
You could try the following. You have to create a redundant UNIQUE constraint on (id, aId) in Parent (SQL is pretty dumb isn't it?!).
\nCREATE TABLE Child\n(parentId INTEGER NOT NULL,\n aId INTEGER NOT NULL UNIQUE,\nFOREIGN KEY (parentId,aId) REFERENCES Parent (id,aId),\ncreatedOn TIMESTAMP NOT NULL);\n
\nPossibly a much better solution would be to drop parentId from the Child table altogether, add bId instead and just reference the Parent table based on (aId, bId):
\nCREATE TABLE Child\n(aId INTEGER NOT NULL UNIQUE,\n bId INTEGER NOT NULL,\nFOREIGN KEY (aId,bId) REFERENCES Parent (aId,bId),\ncreatedOn TIMESTAMP NOT NULL);\n
\nIs there any reason why you can't do that?
\n
soup wrap:
You could try the following. You have to create a redundant UNIQUE constraint on (id, aId) in Parent (SQL is pretty dumb isn't it?!).
CREATE TABLE Child
(parentId INTEGER NOT NULL,
aId INTEGER NOT NULL UNIQUE,
FOREIGN KEY (parentId,aId) REFERENCES Parent (id,aId),
createdOn TIMESTAMP NOT NULL);
Possibly a much better solution would be to drop parentId from the Child table altogether, add bId instead and just reference the Parent table based on (aId, bId):
CREATE TABLE Child
(aId INTEGER NOT NULL UNIQUE,
bId INTEGER NOT NULL,
FOREIGN KEY (aId,bId) REFERENCES Parent (aId,bId),
createdOn TIMESTAMP NOT NULL);
Is there any reason why you can't do that?
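Here is the second suggestion sketched in SQLite, which also enforces composite foreign keys against a UNIQUE (aId, bId) parent key (note SQLite only enforces foreign keys with PRAGMA foreign_keys = ON; the data is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
cur = con.cursor()
cur.executescript("""
CREATE TABLE Parent (
  aId INTEGER NOT NULL,
  bId INTEGER NOT NULL,
  UNIQUE (aId, bId)          -- the composite key the child will reference
);
CREATE TABLE Child (
  aId INTEGER NOT NULL UNIQUE,
  bId INTEGER NOT NULL,
  FOREIGN KEY (aId, bId) REFERENCES Parent (aId, bId)
);
INSERT INTO Parent VALUES (10, 20);
INSERT INTO Child VALUES (10, 20);   -- OK: (10, 20) exists in Parent
""")
try:
    cur.execute("INSERT INTO Child VALUES (11, 99)")  # no such Parent pair
    fk_violation = False
except sqlite3.IntegrityError:
    fk_violation = True
print(fk_violation)
```

A child row can only name an (aId, bId) pair that actually exists in Parent, which is exactly the cross-table constraint being asked for.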
qid & accept id:
(6295231, 6295559)
query:
Ordering a MySQL query with LEFT JOIN
soup:
I think I've cracked it! The following query seems to give me what I need:
\nSELECT c.id, c.name, h.winner\nFROM championships c\nLEFT JOIN title_history h\nON c.id = h.championship\nGROUP BY c.id\nORDER BY c.rank ASC, h.date_from ASC\n
\nEDIT: I haven't cracked it. As I'm grouping by championship ID, I'm now only getting the first title winner, even if there have been title winners after.
\nEDIT 2: Solved with the following query:
\nSELECT friendly_name,\n(SELECT winner FROM title_history WHERE championship = c.id ORDER BY date_from DESC LIMIT 1) \nFROM championships AS c\nORDER BY name\n
\n
soup wrap:
I think I've cracked it! The following query seems to give me what I need:
SELECT c.id, c.name, h.winner
FROM championships c
LEFT JOIN title_history h
ON c.id = h.championship
GROUP BY c.id
ORDER BY c.rank ASC, h.date_from ASC
EDIT: I haven't cracked it. As I'm grouping by championship ID, I'm now only getting the first title winner, even if there have been title winners after.
EDIT 2: Solved with the following query:
SELECT friendly_name,
(SELECT winner FROM title_history WHERE championship = c.id ORDER BY date_from DESC LIMIT 1)
FROM championships AS c
ORDER BY name
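The correlated-subquery form from EDIT 2, sketched against SQLite (table and data invented; column names follow the answer):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE championships (id INTEGER, name TEXT, friendly_name TEXT);
CREATE TABLE title_history (championship INTEGER, winner TEXT, date_from TEXT);
INSERT INTO championships VALUES (1, 'World', 'World Title');
INSERT INTO title_history VALUES
  (1, 'Early Champ', '2010-01-01'),
  (1, 'Current Champ', '2011-06-01');
""")
rows = cur.execute("""
SELECT friendly_name,
       (SELECT winner FROM title_history
        WHERE championship = c.id
        ORDER BY date_from DESC LIMIT 1) AS current_winner
FROM championships AS c
ORDER BY name
""").fetchall()
print(rows)
```

Ordering the subquery by date_from DESC with LIMIT 1 picks the latest winner, avoiding the GROUP BY pitfall the first attempt ran into.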
qid & accept id:
(6295650, 6295878)
query:
SQL query to search by day/month/year/day&month/day&year etc
soup:
You can write maintainable queries that additionally are fast by using the pg/temporal extension:
\nhttps://github.com/jeff-davis/PostgreSQL-Temporal
\ncreate index on events using gist(period(start_date, end_date));\n\nselect *\nfrom events\nwhere period(start_date, end_date) @> :date;\n\nselect *\nfrom events\nwhere period(start_date, end_date) && period(:start, :end);\n
\nYou can even use it to disallow overlaps as a table constraint:
\nalter table events\nadd constraint overlap_excl\nexclude using gist(period(start_date, end_date) WITH &&);\n
\n
\n\nwrite all possible from, to and day/month/year combinations - not maintainable
\n
\nIt's actually more maintainable than you might think, e.g.:
\nselect *\nfrom events\njoin generate_series(:start_date, :end_date, :interval) as datetime\non start_date <= datetime and datetime < end_date;\n
\nBut it's much better to use the above-mentioned period type.
\n
soup wrap:
You can write maintainable queries that additionally are fast by using the pg/temporal extension:
https://github.com/jeff-davis/PostgreSQL-Temporal
create index on events using gist(period(start_date, end_date));
select *
from events
where period(start_date, end_date) @> :date;
select *
from events
where period(start_date, end_date) && period(:start, :end);
You can even use it to disallow overlaps as a table constraint:
alter table events
add constraint overlap_excl
exclude using gist(period(start_date, end_date) WITH &&);
write all possible from, to and day/month/year combinations - not maintainable
It's actually more maintainable than you might think, e.g.:
select *
from events
join generate_series(:start_date, :end_date, :interval) as datetime
on start_date <= datetime and datetime < end_date;
But it's much better to use the above-mentioned period type.
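For intuition, the semantics behind the period operators used above can be sketched in plain Python: `@>` is containment of a point in a half-open `[start, end)` interval, and `&&` is the classic "each starts before the other ends" overlap test. This is only an illustration of the predicate logic, not the extension itself:

```python
from datetime import date

def contains(start, end, d):
    # period(start, end) @> d, with a half-open [start, end) interval
    return start <= d < end

def overlaps(s1, e1, s2, e2):
    # period(s1, e1) && period(s2, e2): two half-open intervals overlap
    # iff each one starts before the other ends
    return s1 < e2 and s2 < e1

assert contains(date(2011, 1, 1), date(2011, 2, 1), date(2011, 1, 15))
assert not contains(date(2011, 1, 1), date(2011, 2, 1), date(2011, 2, 1))
assert overlaps(date(2011, 1, 1), date(2011, 3, 1),
                date(2011, 2, 1), date(2011, 4, 1))
```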
qid & accept id:
(6333687, 6333737)
query:
TSQL counting how many occurrences on each day
soup:
SELECT\n DateWithNoTimePortion = DateAdd(Day, DateDiff(Day, '19000101', DateCol), '19000101'),\n VisitorCount = Count(*)\nFROM Log\nGROUP BY DateDiff(Day, 0, DateCol);\n
\nFor some reason I assumed you were using SQL Server. If that is not true, please let us know. I think the DateDiff method could work for you in other DBMSes depending on the functions they support, but they may have better ways to do the job (such as TRUNC in Oracle).
\nIn SQL Server the above method is one of the fastest ways of doing the job. There are only two faster ways:
\n\nIntrinsic int-conversion rounding :
\nConvert(datetime, Convert(int, DateCol - '12:00:00.003'))\n
\nIf using SQL Server 2008 and up, this is the fastest of all (and you should use it if that's what you have):
\nConvert(date, DateCol)\n
\n
\nWhen SQL Server 2008 is not available, I think the method I posted is the best mix of speed and clarity for future developers looking at the code, avoiding doing magic stuff that isn't clear. You can see the tests backing up my speed claims.
\n
soup wrap:
SELECT
DateWithNoTimePortion = DateAdd(Day, DateDiff(Day, '19000101', DateCol), '19000101'),
VisitorCount = Count(*)
FROM Log
GROUP BY DateDiff(Day, 0, DateCol);
For some reason I assumed you were using SQL Server. If that is not true, please let us know. I think the DateDiff method could work for you in other DBMSes depending on the functions they support, but they may have better ways to do the job (such as TRUNC in Oracle).
In SQL Server the above method is one of the fastest ways of doing the job. There are only two faster ways:
Intrinsic int-conversion rounding :
Convert(datetime, Convert(int, DateCol - '12:00:00.003'))
If using SQL Server 2008 and up, this is the fastest of all (and you should use it if that's what you have):
Convert(date, DateCol)
When SQL Server 2008 is not available, I think the method I posted is the best mix of speed and clarity for future developers looking at the code, avoiding doing magic stuff that isn't clear. You can see the tests backing up my speed claims.
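The underlying technique, truncating each timestamp to its date before grouping, is portable. Here is a small sketch on Python's sqlite3, where `date()` plays the role of the DateAdd/DateDiff truncation (or `Convert(date, ...)`); the log data is invented:

```python
import sqlite3

# Count visitors per calendar day by truncating the timestamp to a date
# before grouping; sqlite's date() is the truncation function here.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE log (visit_time TEXT)")
con.executemany("INSERT INTO log VALUES (?)", [
    ("2011-06-01 09:15:00",),
    ("2011-06-01 17:40:00",),
    ("2011-06-02 08:05:00",),
])
rows = con.execute("""
SELECT date(visit_time) AS day, COUNT(*) AS visitor_count
FROM log
GROUP BY date(visit_time)
ORDER BY day
""").fetchall()
print(rows)  # [('2011-06-01', 2), ('2011-06-02', 1)]
```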
qid & accept id:
(6355613, 6355807)
query:
Xml elements present in spite of null values
soup:
\nwithout changing the FOR XML PATH into\n FOR XML ELEMENTS to use the XSINIL\n switch
\n
\nYou can use elements xsinil with for xml path.
\ndeclare @T table (ID int identity, Name varchar(50))\n\ninsert into @T values ('Name1')\ninsert into @T values (null)\ninsert into @T values ('Name2')\n\nselect\n ID,\n Name\nfrom @T\nfor xml path('item'), root('root'), elements xsinil\n
\nResult:
\n<root xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">\n <item>\n <ID>1</ID>\n <Name>Name1</Name>\n </item>\n <item>\n <ID>2</ID>\n <Name xsi:nil="true" />\n </item>\n <item>\n <ID>3</ID>\n <Name>Name2</Name>\n </item>\n</root>\n
\n
soup wrap:
without changing the FOR XML PATH into
FOR XML ELEMENTS to use the XSINIL
switch
You can use elements xsinil with for xml path.
declare @T table (ID int identity, Name varchar(50))
insert into @T values ('Name1')
insert into @T values (null)
insert into @T values ('Name2')
select
ID,
Name
from @T
for xml path('item'), root('root'), elements xsinil
Result:
<root xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
 <item>
  <ID>1</ID>
  <Name>Name1</Name>
 </item>
 <item>
  <ID>2</ID>
  <Name xsi:nil="true" />
 </item>
 <item>
  <ID>3</ID>
  <Name>Name2</Name>
 </item>
</root>
qid & accept id:
(6404158, 6404187)
query:
How to remove a prefix name from every table name in a mysql database
soup:
You can generate the necessary statements with a single query:
\nselect 'RENAME TABLE ' || table_name || ' TO ' || substr(table_name, 5) ||';'\nfrom information_schema.tables\n
\nSave the output of that query to a file and you have all the statements you need.
\nOr if that returns 0s and 1s rather than the statements (MySQL treats || as logical OR unless PIPES_AS_CONCAT is enabled), here's the version using concat instead:
\nselect concat('RENAME TABLE ', concat(table_name, concat(' TO ', concat(substr(table_name, 5), ';'))))\nfrom information_schema.tables;\n
\n
soup wrap:
You can generate the necessary statements with a single query:
select 'RENAME TABLE ' || table_name || ' TO ' || substr(table_name, 5) ||';'
from information_schema.tables
Save the output of that query to a file and you have all the statements you need.
Or if that returns 0s and 1s rather than the statements (MySQL treats || as logical OR unless PIPES_AS_CONCAT is enabled), here's the version using concat instead:
select concat('RENAME TABLE ', concat(table_name, concat(' TO ', concat(substr(table_name, 5), ';'))))
from information_schema.tables;
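The same generate-then-execute approach can be scripted end to end. Below is a sketch using Python's sqlite3, whose `sqlite_master` catalog and `ALTER TABLE ... RENAME TO` stand in for information_schema and `RENAME TABLE`; the `pre_` prefix and table names are made up, and the `[4:]` slice plays the role of `substr(table_name, 5)`:

```python
import sqlite3

# Generate one rename statement per prefixed table from the catalog,
# then execute them all on the same connection.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pre_users (id INTEGER)")
con.execute("CREATE TABLE pre_orders (id INTEGER)")

stmts = [
    f"ALTER TABLE {name} RENAME TO {name[4:]}"  # strip the 4-char 'pre_' prefix
    for (name,) in con.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table' AND name LIKE 'pre_%'"
    )
]
for stmt in stmts:
    con.execute(stmt)

tables = sorted(n for (n,) in con.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"))
print(tables)  # ['orders', 'users']
```

Note that the statement list is materialized before any rename runs, so the catalog scan isn't disturbed by the renames themselves.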
qid & accept id:
(6418214, 6419482)
query:
Table Normalization (Parse comma separated fields into individual records)
soup:
-- Setup:
\ndeclare @Device table(DeviceId int primary key, Parts varchar(1000))\ndeclare @Part table(PartId int identity(1,1) primary key, PartName varchar(100))\ndeclare @DevicePart table(DeviceId int, PartId int)\n\ninsert @Device\nvalues\n (1, 'Part1, Part2, Part3'),\n (2, 'Part2, Part3, Part4'),\n (3, 'Part1')\n
\n--Script:
\ndeclare @DevicePartTemp table(DeviceId int, PartName varchar(100))\n\ninsert @DevicePartTemp\nselect DeviceId, ltrim(x.value('.', 'varchar(100)'))\nfrom\n(\n select DeviceId, cast('' + replace(Parts, ',', ' ') + ' ' as xml) XmlColumn\n from @Device\n)tt\ncross apply\n XmlColumn.nodes('x') as Nodes(x)\n\n\ninsert @Part\nselect distinct PartName\nfrom @DevicePartTemp\n\ninsert @DevicePart\nselect tmp.DeviceId, prt.PartId\nfrom @DevicePartTemp tmp \n join @Part prt on\n prt.PartName = tmp.PartName\n
\n-- Result:
\nselect *\nfrom @Part\n\nPartId PartName\n----------- ---------\n1 Part1\n2 Part2\n3 Part3\n4 Part4\n\n\nselect *\nfrom @DevicePart\n\nDeviceId PartId\n----------- -----------\n1 1\n1 2\n1 3\n2 2\n2 3\n2 4\n3 1 \n
\n
soup wrap:
-- Setup:
declare @Device table(DeviceId int primary key, Parts varchar(1000))
declare @Part table(PartId int identity(1,1) primary key, PartName varchar(100))
declare @DevicePart table(DeviceId int, PartId int)
insert @Device
values
(1, 'Part1, Part2, Part3'),
(2, 'Part2, Part3, Part4'),
(3, 'Part1')
--Script:
declare @DevicePartTemp table(DeviceId int, PartName varchar(100))
insert @DevicePartTemp
select DeviceId, ltrim(x.value('.', 'varchar(100)'))
from
(
select DeviceId, cast('<x>' + replace(Parts, ',', '</x><x>') + '</x>' as xml) XmlColumn
from @Device
)tt
cross apply
XmlColumn.nodes('x') as Nodes(x)
insert @Part
select distinct PartName
from @DevicePartTemp
insert @DevicePart
select tmp.DeviceId, prt.PartId
from @DevicePartTemp tmp
join @Part prt on
prt.PartName = tmp.PartName
-- Result:
select *
from @Part
PartId PartName
----------- ---------
1 Part1
2 Part2
3 Part3
4 Part4
select *
from @DevicePart
DeviceId PartId
----------- -----------
1 1
1 2
1 3
2 2
2 3
2 4
3 1
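The same normalization can be sketched outside T-SQL. Below is a Python/sqlite3 version of the split-deduplicate-link flow (no XML trick needed, since the split happens in Python); the device data mirrors the setup above:

```python
import sqlite3

# Split each device's comma list, build a distinct part table,
# then the device-part link table.
devices = {1: 'Part1, Part2, Part3', 2: 'Part2, Part3, Part4', 3: 'Part1'}

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE part (part_id INTEGER PRIMARY KEY AUTOINCREMENT,
                   part_name TEXT UNIQUE);
CREATE TABLE device_part (device_id INTEGER, part_id INTEGER);
""")
for device_id, parts in devices.items():
    for part_name in (p.strip() for p in parts.split(',')):
        # UNIQUE + OR IGNORE gives the "insert distinct" step
        con.execute("INSERT OR IGNORE INTO part (part_name) VALUES (?)",
                    (part_name,))
        con.execute("""
            INSERT INTO device_part
            SELECT ?, part_id FROM part WHERE part_name = ?
        """, (device_id, part_name))

print(con.execute("SELECT COUNT(*) FROM part").fetchone()[0])         # 4
print(con.execute("SELECT COUNT(*) FROM device_part").fetchone()[0])  # 7
```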
qid & accept id:
(6434996, 6455095)
query:
Manipulate the sort result considering the user preference - database
soup:
If you want each user to have independent sort orders, you need another table.
\nCREATE TABLE user_sort_order (\n name VARCHAR(?) NOT NULL REFERENCES your-other-table (name),\n user_id INTEGER NOT NULL REFERENCES users (user_id),\n sort_order INTEGER NOT NULL -- Could be float or decimal\n);\n
\nThen ordering is easy.
\nSELECT name \nFROM user_sort_order\nWHERE user_id = ?\nORDER BY sort_order\n
\nThere's no magic bullet for updating.
\n\n- Delete all the user's rows, and insert rows with the new order. (Brute force always works.)
\n- Update every row with the new order. (Could be a lot of UPDATE statements.)
\n- Track the changes in your app, and update only the changed rows and the rows that have to be "bumped" by the changes. (Parsimonious, but error-prone.)
\n- Don't let users impose their own sort order. (Usually not as bad an idea as it sounds.)
\n
\n
soup wrap:
If you want each user to have independent sort orders, you need another table.
CREATE TABLE user_sort_order (
name VARCHAR(?) NOT NULL REFERENCES your-other-table (name),
user_id INTEGER NOT NULL REFERENCES users (user_id),
sort_order INTEGER NOT NULL -- Could be float or decimal
);
Then ordering is easy.
SELECT name
FROM user_sort_order
WHERE user_id = ?
ORDER BY sort_order
There's no magic bullet for updating.
- Delete all the user's rows, and insert rows with the new order. (Brute force always works.)
- Update every row with the new order. (Could be a lot of UPDATE statements.)
- Track the changes in your app, and update only the changed rows and the rows that have to be "bumped" by the changes. (Parsimonious, but error-prone.)
- Don't let users impose their own sort order. (Usually not as bad an idea as it sounds.)
qid & accept id:
(6440318, 6440437)
query:
Oracle : Automatic modification date on update
soup:
You thought wrongly, Oracle does what you order it to do.
\nYou can either try
\nupdate mytable a set title = \n (select title from mytable2 b \n where b.id = a.id and \n b.title != a.title)\n
\nor change the trigger to specifically check for a different title name.
\ncreate or replace\nTRIGGER schema.name_of_trigger\nBEFORE UPDATE ON schema.name_of_table\nFOR EACH ROW\nBEGIN\n-- Check for modification of title:\n if :new.title != :old.title then\n :new.modify_date := sysdate;\n end if;\nEND;\n
\n
soup wrap:
You thought wrongly, Oracle does what you order it to do.
You can either try
update mytable a set title =
(select title from mytable2 b
where b.id = a.id and
b.title != a.title)
or change the trigger to specifically check for a different title name.
create or replace
TRIGGER schema.name_of_trigger
BEFORE UPDATE ON schema.name_of_table
FOR EACH ROW
BEGIN
-- Check for modification of title:
if :new.title != :old.title then
:new.modify_date := sysdate;
end if;
END;
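A sketch of the same idea on sqlite3, for anyone without an Oracle instance handy: sqlite has no `:new.modify_date` assignment, so an AFTER UPDATE trigger issues a follow-up UPDATE instead, but the WHEN clause still skips updates that don't change the title. Table and trigger names are made up:

```python
import sqlite3

# Stamp modify_date only when title actually changes.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE mytable (id INTEGER PRIMARY KEY, title TEXT, modify_date TEXT);
INSERT INTO mytable VALUES (1, 'old title', NULL);
CREATE TRIGGER touch_modify_date
AFTER UPDATE OF title ON mytable
WHEN NEW.title <> OLD.title
BEGIN
  UPDATE mytable SET modify_date = datetime('now') WHERE id = NEW.id;
END;
""")
con.execute("UPDATE mytable SET title = 'old title' WHERE id = 1")  # no change
unchanged = con.execute("SELECT modify_date FROM mytable").fetchone()[0]
con.execute("UPDATE mytable SET title = 'new title' WHERE id = 1")  # real change
stamped = con.execute("SELECT modify_date FROM mytable").fetchone()[0]
print(unchanged, stamped)  # unchanged stays None; stamped holds a timestamp
```

The inner UPDATE only touches modify_date, so it doesn't re-fire the `UPDATE OF title` trigger.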
qid & accept id:
(6468506, 6468848)
query:
Can I delete the most recent record without sub-select in Oracle?
soup:
The most readable way is probably what you wrote. But it can be very wasteful depending on various factors. In particular, if there is no index on process_date it likely has to do 2 full table scans.
\nThe difficulty in writing something that is both simple and more efficient, is that any view of the table that includes a ranking or ordering will also not allow modifications.
\nHere's one alternate way to approach it, using PL/SQL, that will probably be more efficient in some cases but is clearly less readable.
\nDECLARE\n CURSOR delete_cur IS\n SELECT /*+ FIRST_ROWS(1) */\n NULL\n FROM daily_statistics\n ORDER BY process_date DESC\n FOR UPDATE;\n trash CHAR(1);\nBEGIN\n OPEN delete_cur;\n FETCH delete_cur INTO trash;\n IF delete_cur%FOUND THEN\n DELETE FROM daily_statistics WHERE CURRENT OF delete_cur;\n END IF;\n CLOSE delete_cur;\nEND;\n/\n
\nAlso note this potentially produces different results from your statement if there can be multiple rows with the same process_date value. To make it handle duplicates requires a little more complexity:
\nDECLARE\n CURSOR delete_cur IS\n SELECT /*+ FIRST_ROWS(1) */\n process_date\n FROM daily_statistics\n ORDER BY process_date DESC\n FOR UPDATE;\n del_date DATE;\n next_date DATE;\nBEGIN\n OPEN delete_cur;\n FETCH delete_cur INTO del_date;\n IF delete_cur%FOUND THEN\n DELETE FROM daily_statistics WHERE CURRENT OF delete_cur;\n END IF;\n LOOP\n FETCH delete_cur INTO next_date;\n EXIT WHEN delete_cur%NOTFOUND OR next_date <> del_date;\n DELETE FROM daily_statistics WHERE CURRENT OF delete_cur;\n END LOOP;\n CLOSE delete_cur;\nEND;\n/\n
\n
soup wrap:
The most readable way is probably what you wrote. But it can be very wasteful depending on various factors. In particular, if there is no index on process_date it likely has to do 2 full table scans.
The difficulty in writing something that is both simple and more efficient, is that any view of the table that includes a ranking or ordering will also not allow modifications.
Here's one alternate way to approach it, using PL/SQL, that will probably be more efficient in some cases but is clearly less readable.
DECLARE
CURSOR delete_cur IS
SELECT /*+ FIRST_ROWS(1) */
NULL
FROM daily_statistics
ORDER BY process_date DESC
FOR UPDATE;
trash CHAR(1);
BEGIN
OPEN delete_cur;
FETCH delete_cur INTO trash;
IF delete_cur%FOUND THEN
DELETE FROM daily_statistics WHERE CURRENT OF delete_cur;
END IF;
CLOSE delete_cur;
END;
/
Also note this potentially produces different results from your statement if there can be multiple rows with the same process_date value. To make it handle duplicates requires a little more complexity:
DECLARE
CURSOR delete_cur IS
SELECT /*+ FIRST_ROWS(1) */
process_date
FROM daily_statistics
ORDER BY process_date DESC
FOR UPDATE;
del_date DATE;
next_date DATE;
BEGIN
OPEN delete_cur;
FETCH delete_cur INTO del_date;
IF delete_cur%FOUND THEN
DELETE FROM daily_statistics WHERE CURRENT OF delete_cur;
END IF;
LOOP
FETCH delete_cur INTO next_date;
EXIT WHEN delete_cur%NOTFOUND OR next_date <> del_date;
DELETE FROM daily_statistics WHERE CURRENT OF delete_cur;
END LOOP;
CLOSE delete_cur;
END;
/
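For reference, the sub-select form the question wanted to avoid can be sketched on sqlite3. Like the second PL/SQL block, it removes every row carrying the most recent process_date, ties included; the data here is hypothetical:

```python
import sqlite3

# Delete all rows sharing the maximum process_date (ties included).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE daily_statistics (process_date TEXT)")
con.executemany("INSERT INTO daily_statistics VALUES (?)",
                [("2011-06-01",), ("2011-06-02",), ("2011-06-02",)])
con.execute("""
DELETE FROM daily_statistics
WHERE process_date = (SELECT MAX(process_date) FROM daily_statistics)
""")
rows = con.execute("SELECT process_date FROM daily_statistics").fetchall()
print(rows)  # [('2011-06-01',)]
```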
qid & accept id:
(6524697, 6524885)
query:
using like operator in html5 database query
soup:
If you simply replace the '=' with a LIKE operator, you will get the same exact match answer as your current query. I assume you would like to use the LIKE operator to do something different (such as a begins with search).
\nI've shown how SQL databases normally do this, but whether it works for you depends on how SQL-compatible the dialect used by the HTML5 engine is.
\nFirstly, it depends on the concatenation syntax. Secondly, it depends on whether concatenating NULL with a string produces NULL or the string. Most professional databases yield NULL (this is good for you, because then this will work).
\nThe following should work on MySQL or Oracle and some other databases:
\nSELECT * FROM bdreminders\nWHERE firstname LIKE IFNULL( CONCAT(?,'%'), firstname)\nAND lastname LIKE IFNULL( CONCAT(?,'%'), lastname)\nAND baughtgift LIKE IFNULL( CONCAT(?,'%'), baughtgift)\nORDER BY firstname asc\n
\nor (for Oracle, Postgre and others)
\nSELECT * FROM bdreminders\nWHERE firstname LIKE IFNULL( ? ||'%', firstname)\nAND lastname LIKE IFNULL( ? || '%', lastname)\nAND baughtgift LIKE IFNULL( ? || '%', baughtgift)\nORDER BY firstname asc\n
\nor (for SQL server and others)
\nSELECT * FROM bdreminders\nWHERE firstname LIKE IFNULL( ? +'%', firstname)\nAND lastname LIKE IFNULL( ? + '%', lastname)\nAND baughtgift LIKE IFNULL( ? + '%', baughtgift)\nORDER BY firstname asc\n
\nI would try the last one first. If the above does not work and you get all bdreminders, the database does not concatenate NULL+string to NULL. In this case, I don't think you can use ISNULL as it will return the first non-null value and thus always return '%'.
\n
soup wrap:
If you simply replace the '=' with a LIKE operator, you will get the same exact match answer as your current query. I assume you would like to use the LIKE operator to do something different (such as a begins with search).
I've shown how SQL databases normally do this, but whether it works for you depends on how SQL-compatible the dialect used by the HTML5 engine is.
Firstly, it depends on the concatenation syntax. Secondly, it depends on whether concatenating NULL with a string produces NULL or the string. Most professional databases yield NULL (this is good for you, because then this will work).
The following should work on MySQL or Oracle and some other databases:
SELECT * FROM bdreminders
WHERE firstname LIKE IFNULL( CONCAT(?,'%'), firstname)
AND lastname LIKE IFNULL( CONCAT(?,'%'), lastname)
AND baughtgift LIKE IFNULL( CONCAT(?,'%'), baughtgift)
ORDER BY firstname asc
or (for Oracle, Postgres and others)
SELECT * FROM bdreminders
WHERE firstname LIKE IFNULL( ? ||'%', firstname)
AND lastname LIKE IFNULL( ? || '%', lastname)
AND baughtgift LIKE IFNULL( ? || '%', baughtgift)
ORDER BY firstname asc
or (for SQL server and others)
SELECT * FROM bdreminders
WHERE firstname LIKE IFNULL( ? +'%', firstname)
AND lastname LIKE IFNULL( ? + '%', lastname)
AND baughtgift LIKE IFNULL( ? + '%', baughtgift)
ORDER BY firstname asc
I would try the last one first. If the above does not work and you get all bdreminders, the database does not concatenate NULL+string to NULL. In this case, I don't think you can use ISNULL as it will return the first non-null value and thus always return '%'.
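As it happens, sqlite3 (the engine behind most HTML5 Web SQL implementations) supports both IFNULL and NULL-propagating || concatenation, so the Oracle-style variant above can be tried directly. In this sketch a NULL parameter makes the condition degenerate to "column LIKE column", i.e. no filter; the table and names are invented:

```python
import sqlite3

# LIKE IFNULL(? || '%', column): a NULL parameter disables that filter.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE bdreminders (firstname TEXT, lastname TEXT)")
con.executemany("INSERT INTO bdreminders VALUES (?, ?)",
                [("Alice", "Smith"), ("Bob", "Jones")])
query = """
SELECT firstname FROM bdreminders
WHERE firstname LIKE IFNULL(? || '%', firstname)
  AND lastname  LIKE IFNULL(? || '%', lastname)
ORDER BY firstname
"""
some = con.execute(query, ("A", None)).fetchall()
everyone = con.execute(query, (None, None)).fetchall()
print(some)      # [('Alice',)]
print(everyone)  # [('Alice',), ('Bob',)]
```

One caveat: "column LIKE column" can miss rows whose values contain LIKE wildcards (% or _).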
qid & accept id:
(6536396, 6537272)
query:
How to convert two lists into adjacency matrix SQL Server T-SQL?
soup:
It was hard to avoid those null values in the pivot.
\ndeclare @t table (fruit varchar(10), colour varchar(10))\n\ninsert @t\nselect 'Apple', 'Red' union all\nselect 'Orange', 'Red' union all\nselect 'Berry', 'Green' union all\nselect 'PineApple', 'Green'\n\nselect * from (\nselect a.fruit, b.colour, case when c.fruit is null then 0 else 1 end found from \n(select distinct fruit, colour from @t) a\ncross join \n(select distinct colour from @t) b\nleft outer join \n(select distinct fruit, colour from @t) c\non a.fruit = c.fruit and b.colour = c.colour) d\nPIVOT\n(max(found) \nFOR colour\nin([red],[green]) \n)AS p\norder by 3, 1 \n
\nOutput
\nfruit red green\n---------- ----------- -----------\nApple 1 0\nOrange 1 0\nBerry 0 1\nPineApple 0 1\n
\n
soup wrap:
It was hard to avoid those null values in the pivot.
declare @t table (fruit varchar(10), colour varchar(10))
insert @t
select 'Apple', 'Red' union all
select 'Orange', 'Red' union all
select 'Berry', 'Green' union all
select 'PineApple', 'Green'
select * from (
select a.fruit, b.colour, case when c.fruit is null then 0 else 1 end found from
(select distinct fruit, colour from @t) a
cross join
(select distinct colour from @t) b
left outer join
(select distinct fruit, colour from @t) c
on a.fruit = c.fruit and b.colour = c.colour) d
PIVOT
(max(found)
FOR colour
in([red],[green])
)AS p
order by 3, 1
Output
fruit red green
---------- ----------- -----------
Apple 1 0
Orange 1 0
Berry 0 1
PineApple 0 1
qid & accept id:
(6551214, 6556239)
query:
MySQL GROUP BY DateTime +/- 3 seconds
soup:
I'm using Tom H.'s excellent idea but doing it a little differently here:
\nInstead of finding all the rows that are the beginnings of chains, we can find all times that are the beginnings of chains, then go back and find the rows that match the times.
\nQuery #1 here should tell you which times are the beginnings of chains by finding which times do not have any times below them but within 3 seconds:
\nSELECT DISTINCT Timestamp\nFROM Table a\nLEFT JOIN Table b\nON (b.Timestamp >= a.TimeStamp - INTERVAL 3 SECONDS\n AND b.Timestamp < a.Timestamp)\nWHERE b.Timestamp IS NULL\n
\nAnd then for each row, we can find the largest chain-starting timestamp that is less than our timestamp with Query #2:
\nSELECT Table.id, MAX(StartOfChains.TimeStamp) AS ChainStartTime\nFROM Table\nJOIN ([query #1]) StartofChains\nON Table.Timestamp >= StartOfChains.TimeStamp\nGROUP BY Table.id\n
\nOnce we have that, we can GROUP BY it as you wanted.
\nSELECT COUNT(*) --or whatever\nFROM Table\nJOIN ([query #2]) GroupingQuery\nON Table.id = GroupingQuery.id\nGROUP BY GroupingQuery.ChainStartTime\n
\nI'm not entirely sure this is distinct enough from Tom H's answer to be posted separately, but it sounded like you were having trouble with implementation, and I was thinking about it, so I thought I'd post again. Good luck!
\n
soup wrap:
I'm using Tom H.'s excellent idea but doing it a little differently here:
Instead of finding all the rows that are the beginnings of chains, we can find all times that are the beginnings of chains, then go back and find the rows that match the times.
Query #1 here should tell you which times are the beginnings of chains by finding which times do not have any times below them but within 3 seconds:
SELECT DISTINCT Timestamp
FROM Table a
LEFT JOIN Table b
ON (b.Timestamp >= a.TimeStamp - INTERVAL 3 SECOND
AND b.Timestamp < a.Timestamp)
WHERE b.Timestamp IS NULL
And then for each row, we can find the largest chain-starting timestamp that is less than our timestamp with Query #2:
SELECT Table.id, MAX(StartOfChains.TimeStamp) AS ChainStartTime
FROM Table
JOIN ([query #1]) StartofChains
ON Table.Timestamp >= StartOfChains.TimeStamp
GROUP BY Table.id
Once we have that, we can GROUP BY it as you wanted.
SELECT COUNT(*) --or whatever
FROM Table
JOIN ([query #2]) GroupingQuery
ON Table.id = GroupingQuery.id
GROUP BY GroupingQuery.ChainStartTime
I'm not entirely sure this is distinct enough from Tom H's answer to be posted separately, but it sounded like you were having trouble with implementation, and I was thinking about it, so I thought I'd post again. Good luck!
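For intuition, here is the chain idea in plain Python: a timestamp starts a new chain exactly when no earlier timestamp falls within 3 seconds of it, which is the condition Query #1 expresses with the self-join. The sample timestamps are made up:

```python
from datetime import datetime, timedelta

def group_into_chains(timestamps, gap=timedelta(seconds=3)):
    """Group sorted timestamps into chains: a gap > 3s starts a new chain."""
    chains = []
    for ts in sorted(timestamps):
        if chains and ts - chains[-1][-1] <= gap:
            chains[-1].append(ts)   # within 3s of the previous: same chain
        else:
            chains.append([ts])     # no timestamp within 3s before: new chain
    return chains

ts = [datetime(2011, 1, 1, 0, 0, s) for s in (0, 2, 4, 10, 11, 20)]
print([len(c) for c in group_into_chains(ts)])  # [3, 2, 1]
```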
qid & accept id:
(6591613, 6591653)
query:
DB: saving user's height and weight
soup:
There are several ways... one is to just have two numeric columns, one for height, one for weight, then do the conversions (if necessary) at display time. Another is to create a "height" table and a "weight" table, each with a primary key that is linked from another table. Then you can store both English and metric values in these tables (along with any other meta info you want):
\nCREATE TABLE height (\n id SERIAL PRIMARY KEY,\n english VARCHAR,\n inches INT,\n cm INT,\n hands INT // As in, the height of a horse\n);\n\nINSERT INTO height VALUES\n (1,'4 feet', 48, 122, 12),\n (2,'4 feet, 1 inch', 49, 124, 12),\n (3,'4 feet, 2 inches', 50, 127, 12),\n (3,'4 feet, 3 inches', 51, 130, 12),\n ....\n
\nYou get the idea...
\nThen your users table will reference the height and weight tables--and possibly many other dimension tables--astrological sign, marital status, etc.
\nCREATE TABLE users (\n uid SERIAL PRIMARY KEY,\n height INT REFERENCES height(id),\n weight INT references weight(id),\n sign INT references sign(id),\n ...\n);\n
\nThen to do a search for users between 4 and 5 feet:
\nSELECT *\nFROM users\nJOIN height ON users.height = height.id\nWHERE height.inches >= 48 AND height.inches <= 60;\n
\nSeveral advantages to this method:
\n\n- You don't have to duplicate the "effort" (as if it were any real work) to do the conversion on display--just select the format you wish to display!
\n- It makes populating drop-down boxes in an HTML select super easy--just
SELECT english FROM height ORDER BY inches, for instance. \n- It makes your logic for various dimensions--including non-numerical ones (like astrological signs) obviously similar--you don't have special case code all over the place for each data type.
\n- It scales really well
\n- It makes it easy to add new representations of your data (for instance, to add the 'hands' column to the height table)
\n
\n
soup wrap:
There are several ways... one is to just have two numeric columns, one for height, one for weight, then do the conversions (if necessary) at display time. Another is to create a "height" table and a "weight" table, each with a primary key that is linked from another table. Then you can store both English and metric values in these tables (along with any other meta info you want):
CREATE TABLE height (
id SERIAL PRIMARY KEY,
english VARCHAR,
inches INT,
cm INT,
hands INT -- as in, the height of a horse
);
INSERT INTO height VALUES
(1,'4 feet', 48, 122, 12),
(2,'4 feet, 1 inch', 49, 124, 12),
(3,'4 feet, 2 inches', 50, 127, 12),
(3,'4 feet, 3 inches', 51, 130, 12),
....
You get the idea...
Then your users table will reference the height and weight tables--and possibly many other dimension tables--astrological sign, marital status, etc.
CREATE TABLE users (
uid SERIAL PRIMARY KEY,
height INT REFERENCES height(id),
weight INT references weight(id),
sign INT references sign(id),
...
);
Then to do a search for users between 4 and 5 feet:
SELECT *
FROM users
JOIN height ON users.height = height.id
WHERE height.inches >= 48 AND height.inches <= 60;
Several advantages to this method:
- You don't have to duplicate the "effort" (as if it were any real work) to do the conversion on display--just select the format you wish to display!
- It makes populating drop-down boxes in an HTML select super easy--just
SELECT english FROM height ORDER BY inches, for instance.
- It makes your logic for various dimensions--including non-numerical ones (like astrological signs) obviously similar--you don't have special case code all over the place for each data type.
- It scales really well
- It makes it easy to add new representations of your data (for instance, to add the 'hands' column to the height table)
qid & accept id:
(6611453, 6612326)
query:
PostgreSQL: trying to find miss and mister of the last month with highest rating
soup:
Say you run it once on the first day of the month, and cache the results, since counting votes on every page is kinda useless.
\nFirst some date arithmetic :
\nSELECT now(), \n date_trunc( 'month', now() ) - '1 MONTH'::INTERVAL, \n date_trunc( 'month', now() );\n\n now | ?column? | date_trunc \n-------------------------------+------------------------+------------------------\n 2011-07-07 16:24:38.765559+02 | 2011-06-01 00:00:00+02 | 2011-07-01 00:00:00+02\n
\nOK, we got the bounds for the "last month" datetime range.\nNow we need some window function to get the first rows per gender :
\nSELECT * FROM (\n SELECT *, rank( ) over (partition by gender order by score desc ) \n FROM (\n SELECT user_id, count(*) AS score FROM pref_rep \n WHERE nice=true \n AND last_rated >= date_trunc( 'month', now() ) - '1 MONTH'::INTERVAL\n AND last_rated < date_trunc( 'month', now() )\n GROUP BY user_id) s1 \n JOIN users USING (user_id)) s2 \nWHERE rank=1;\n
\nNote this can give you several rows in case of ex-aequo.
\nEDIT :
\n\nI've got a nice suggestion to cast timestamps to strings in order to\n find records for the last month (not for the last 30 days)
\n
\ndate_trunc() works much better.
\nIf you make 2 queries, you'll have to make the count() twice. Since users can potentially vote many times for other users, that table will probably be the larger one, so scanning it once is a good thing.
\nYou can't "leave joining back onto the users table to the outer part of the query too" because you need genders...
\nQuery above takes about 30 ms with 1k users and 100k votes so you'd definitely want to cache it.
\n
soup wrap:
Say you run it once on the first day of the month, and cache the results, since counting votes on every page is kinda useless.
First some date arithmetic :
SELECT now(),
date_trunc( 'month', now() ) - '1 MONTH'::INTERVAL,
date_trunc( 'month', now() );
now | ?column? | date_trunc
-------------------------------+------------------------+------------------------
2011-07-07 16:24:38.765559+02 | 2011-06-01 00:00:00+02 | 2011-07-01 00:00:00+02
OK, we got the bounds for the "last month" datetime range.
Now we need some window function to get the first rows per gender :
SELECT * FROM (
SELECT *, rank( ) over (partition by gender order by score desc )
FROM (
SELECT user_id, count(*) AS score FROM pref_rep
WHERE nice=true
AND last_rated >= date_trunc( 'month', now() ) - '1 MONTH'::INTERVAL
AND last_rated < date_trunc( 'month', now() )
GROUP BY user_id) s1
JOIN users USING (user_id)) s2
WHERE rank=1;
Note this can give you several rows in case of ex-aequo.
EDIT :
I've got a nice suggestion to cast timestamps to strings in order to
find records for the last month (not for the last 30 days)
date_trunc() works much better.
If you make 2 queries, you'll have to make the count() twice. Since users can potentially vote many times for other users, that table will probably be the larger one, so scanning it once is a good thing.
You can't "leave joining back onto the users table to the outer part of the query too" because you need genders...
Query above takes about 30 ms with 1k users and 100k votes so you'd definitely want to cache it.
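The rank-per-gender window query is runnable on sqlite3 too (SQLite 3.25+ supports window functions), which makes it easy to experiment with the tie behaviour. Here the vote-counting subquery is replaced by precomputed scores, so only the window part is exercised:

```python
import sqlite3

# RANK() OVER (PARTITION BY gender ...) keeps the top scorer(s) per gender;
# a tie yields multiple rows, matching the ex-aequo note above.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE scores (user_id INTEGER, gender TEXT, score INTEGER)")
con.executemany("INSERT INTO scores VALUES (?, ?, ?)", [
    (1, 'F', 10), (2, 'F', 25), (3, 'M', 7), (4, 'M', 7),
])
rows = con.execute("""
SELECT user_id, gender FROM (
  SELECT user_id, gender,
         RANK() OVER (PARTITION BY gender ORDER BY score DESC) AS rnk
  FROM scores
) WHERE rnk = 1
ORDER BY gender, user_id
""").fetchall()
print(rows)  # [(2, 'F'), (3, 'M'), (4, 'M')] -- the tie gives two misters
```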
qid & accept id:
(6616800, 6616899)
query:
SQL Insert into 2 tables, passing the new PK from one table as the FK in the other
soup:
Despite what others have answered, this absolutely is possible, although it takes 2 queries made consecutively with the same connection (to maintain the session state).
\nHere's the mysql solution (with executable test code below):
\nINSERT INTO Table1 (col1, col2) VALUES ( val1, val2 );\nINSERT INTO Table2 (foreign_key_column) VALUES (LAST_INSERT_ID());\n
\nNote: These should be executed using a single connection.
\nHere's the test code:
\ncreate table tab1 (id int auto_increment primary key, note text);\ncreate table tab2 (id int auto_increment primary key, tab2_id int references tab1, note text);\ninsert into tab1 values (null, 'row 1');\ninsert into tab2 values (null, LAST_INSERT_ID(), 'row 1');\nselect * from tab1;\nselect * from tab2;\nmysql> select * from tab1;\n+----+-------+\n| id | note |\n+----+-------+\n| 1 | row 1 |\n+----+-------+\n1 row in set (0.00 sec)\n\nmysql> select * from tab2;\n+----+---------+-------+\n| id | tab2_id | note |\n+----+---------+-------+\n| 1 | 1 | row 1 |\n+----+---------+-------+\n1 row in set (0.00 sec)\n
\n
soup wrap:
Despite what others have answered, this absolutely is possible, although it takes 2 queries made consecutively with the same connection (to maintain the session state).
Here's the mysql solution (with executable test code below):
INSERT INTO Table1 (col1, col2) VALUES ( val1, val2 );
INSERT INTO Table2 (foreign_key_column) VALUES (LAST_INSERT_ID());
Note: These should be executed using a single connection.
Here's the test code:
create table tab1 (id int auto_increment primary key, note text);
create table tab2 (id int auto_increment primary key, tab2_id int references tab1, note text);
insert into tab1 values (null, 'row 1');
insert into tab2 values (null, LAST_INSERT_ID(), 'row 1');
select * from tab1;
select * from tab2;
mysql> select * from tab1;
+----+-------+
| id | note |
+----+-------+
| 1 | row 1 |
+----+-------+
1 row in set (0.00 sec)
mysql> select * from tab2;
+----+---------+-------+
| id | tab2_id | note |
+----+---------+-------+
| 1 | 1 | row 1 |
+----+---------+-------+
1 row in set (0.00 sec)
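The same two-statement pattern is available from client code as well. In Python's sqlite3, `last_insert_rowid()` is exposed as `cursor.lastrowid`; like LAST_INSERT_ID(), it is per-connection state, which is why both inserts must run on the same connection. (The FK column is named tab1_id here for clarity; the schema is otherwise the test schema above.)

```python
import sqlite3

# Insert a parent row, then use the generated key as the child's FK.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tab1 (id INTEGER PRIMARY KEY AUTOINCREMENT, note TEXT);
CREATE TABLE tab2 (id INTEGER PRIMARY KEY AUTOINCREMENT,
                   tab1_id INTEGER REFERENCES tab1(id), note TEXT);
""")
cur = con.execute("INSERT INTO tab1 (note) VALUES ('row 1')")
con.execute("INSERT INTO tab2 (tab1_id, note) VALUES (?, 'row 1')",
            (cur.lastrowid,))
print(con.execute("SELECT id, tab1_id FROM tab2").fetchall())  # [(1, 1)]
```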
qid & accept id:
(6621502, 6623392)
query:
how to a query to match the records in two different tables and if a match update with new values, no match prompt me to fill in the details?
soup:
Here are the steps you can do:
\n Load the CSV files (using any of BCP, BULK INSERT, Import export wizard, SSIS packages) for loading tableB. This process is independent of updating tableA. \n Now for TableA create an update trigger that checks for all the SNOs present in B and NOT in A while updating it. See below DDLs and queries as example and accordingly modify: \n\n\n create table TABLEA (\n PartNo varchar(30),\n SNo varchar(30),\n PO varchar(10),\n DO varchar(30))\n\n insert into TABLEA \n select '1AB1009', 'GR7764', 'ST', 'OND'\n union\n select '1AB1009','GR7765','ST','OND'\n\n create table TABLEB ( \n SNo varchar(30)\n )\n insert into TABLEB\n select 'GR7764'\n union\n select 'GR7765'\n\n select * from TABLEA\n select * from TABLEB\n GO\n\n
\n Now create an instead of Update trigger on tableA to warn about SNOs missing in tableA when trying to insert from front end app
\n\n\n CREATE TRIGGER missingSNOs ON TABLEA\n INSTEAD OF UPDATE\n AS \n\n BEGIN\n if EXISTS (SELECT 1\n FROM TABLEB B\n LEFT OUTER JOIN\n INSERTED I\n ON B.SNO = I.SNO\n WHERE I.SNO IS NULL\n )\n begin\n SELECT B.SNO\n FROM TABLEB B\n LEFT OUTER JOIN\n INSERTED I\n ON B.SNO = I.SNO\n WHERE I.SNO IS NULL\n RAISERROR('S.nos are missing in tableA which are present in tableB',16,1);\n ROLLBACK;\n end \n END\n GO\n\n
\n Test if the trigger fires when the Sno are missing
\n\n\n-- Errors with message as the SNO is missing\nupdate TABLEA\nset PartNo = 'newPartNo'\nwhere SNO = 'SnoNOTinB'\n\n-- works no errors as both SNOS are present in tableB\nupdate TABLEA\nset PartNo = 'new one'\nwhere SNO in ('GR7764', 'GR7765')\n\n-- Also you dont have to join with tableB now and modify query as below\nUPDATE A\nset A.Mat_No ='"+ Mat_No+"',WO_No='"+WO_No+"',\nCode = '"+Code+"',Desc = '"+Desc+"',\nCenter='"+Center+"',\nDate='"+Date+"',\nRemarks='"+Remarks+"' \nFROM TableA A \nWHERE A.Status = 'IN' \n\n
\n Finally clean up the code
\n\n\n drop table TABLEA\n drop table TABLEB\n\n
\n
soup wrap:
Here are the steps you can follow:
Load the CSV files into tableB (using any of BCP, BULK INSERT, the Import/Export wizard, or SSIS packages). This process is independent of updating tableA.
Now create an update trigger on TableA that checks for any SNos present in tableB but NOT in tableA when it is updated. See the DDL and queries below as an example and modify accordingly:
create table TABLEA (
PartNo varchar(30),
SNo varchar(30),
PO varchar(10),
DO varchar(30))
insert into TABLEA
select '1AB1009', 'GR7764', 'ST', 'OND'
union
select '1AB1009','GR7765','ST','OND'
create table TABLEB (
SNo varchar(30)
)
insert into TABLEB
select 'GR7764'
union
select 'GR7765'
select * from TABLEA
select * from TABLEB
GO
Now create an INSTEAD OF UPDATE trigger on tableA to warn about SNos that are present in tableB but missing from tableA when updating from the front-end app:
CREATE TRIGGER missingSNOs ON TABLEA
INSTEAD OF UPDATE
AS
BEGIN
if EXISTS (SELECT 1
FROM TABLEB B
LEFT OUTER JOIN
INSERTED I
ON B.SNO = I.SNO
WHERE I.SNO IS NULL
)
begin
SELECT B.SNO
FROM TABLEB B
LEFT OUTER JOIN
INSERTED I
ON B.SNO = I.SNO
WHERE I.SNO IS NULL
RAISERROR('S.nos are missing in tableA which are present in tableB',16,1);
ROLLBACK;
end
END
GO
Test whether the trigger fires when an SNo is missing:
-- Errors with a message because the SNO is missing
update TABLEA
set PartNo = 'newPartNo'
where SNO = 'SnoNOTinB'
-- Works with no errors since both SNos are present in tableB
update TABLEA
set PartNo = 'new one'
where SNO in ('GR7764', 'GR7765')
-- Also, you don't have to join with tableB now; modify the query as below
UPDATE A
set A.Mat_No ='"+ Mat_No+"',WO_No='"+WO_No+"',
Code = '"+Code+"',Desc = '"+Desc+"',
Center='"+Center+"',
Date='"+Date+"',
Remarks='"+Remarks+"'
FROM TableA A
WHERE A.Status = 'IN'
Finally, clean up the test objects:
drop table TABLEA
drop table TABLEB
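A simplified analog of the validation trigger can be sketched with Python's built-in sqlite3 module. This is not the T-SQL answer itself: SQLite triggers use a WHEN clause plus RAISE(ABORT, ...) rather than IF EXISTS/RAISERROR/ROLLBACK, and the sketch reduces the check to the rule "reject an update whose new SNo has no counterpart in TABLEB":

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE TABLEA (PartNo TEXT, SNo TEXT);
    CREATE TABLE TABLEB (SNo TEXT);
    INSERT INTO TABLEA VALUES ('1AB1009', 'GR7764'), ('1AB1009', 'GR7765');
    INSERT INTO TABLEB VALUES ('GR7764'), ('GR7765');

    -- Abort any update that would leave an SNo with no counterpart in TABLEB
    CREATE TRIGGER missing_snos BEFORE UPDATE ON TABLEA
    WHEN NEW.SNo NOT IN (SELECT SNo FROM TABLEB)
    BEGIN
        SELECT RAISE(ABORT, 'SNo is missing from TABLEB');
    END;
""")
conn.execute("UPDATE TABLEA SET PartNo = 'new one' WHERE SNo = 'GR7764'")  # allowed
blocked = False
try:
    conn.execute("UPDATE TABLEA SET SNo = 'SnoNOTinB' WHERE SNo = 'GR7765'")
except sqlite3.IntegrityError:
    blocked = True  # the trigger rejected the update
print(blocked)
```

The first update passes because its SNo exists in TABLEB; the second is rolled back by the trigger.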
qid & accept id:
(6673667, 6673781)
query:
Searching words in a database
soup:
Strictly speaking your query is correct, however what you're really looking for is "words starting with 'hyperlink'" which means there will be a space character or it will be the start of the text field.
\nselect O_ObjectID, \n rtrim(O_Name) as O_Name\nfrom A_Object\nwhere O_Name like @NamePrefix + '%' OR O_Name like '% ' + @NamePrefix + '%'\norder by O_Name\n
\nnote the added space character in '% ' + @NamePrefix + '%'
\nYour other option would be to use full text search which would mean your query would look like this:
\nselect O_ObjectID, \n rtrim(O_Name) as O_Name\nfrom A_Object\nwhere CONTAINS(O_Name, '"'+ @NamePrefix + '*"')\norder by O_Name\n
\nand performance on this will be significantly faster as it will be indexed at a word level.
\n
soup wrap:
Strictly speaking your query is correct; however, what you're really looking for is "words starting with 'hyperlink'", which means the match will either be preceded by a space character or sit at the start of the text field.
select O_ObjectID,
rtrim(O_Name) as O_Name
from A_Object
where O_Name like @NamePrefix + '%' OR O_Name like '% ' + @NamePrefix + '%'
order by O_Name
Note the added space character in '% ' + @NamePrefix + '%'.
Your other option would be to use full-text search, in which case your query would look like this:
select O_ObjectID,
rtrim(O_Name) as O_Name
from A_Object
where CONTAINS(O_Name, '"'+ @NamePrefix + '*"')
order by O_Name
Performance here will be significantly faster because the column is indexed at the word level.
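The word-boundary LIKE pattern can be sketched with Python's built-in sqlite3 module (an illustrative stand-in for the T-SQL version; the sample rows are made up, and the parameter plays the role of @NamePrefix):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE A_Object (O_ObjectID INTEGER, O_Name TEXT)")
conn.executemany("INSERT INTO A_Object VALUES (?, ?)",
                 [(1, "hyperlink test"), (2, "my hyperlink"), (3, "hyperactive")])
prefix = "hyperlink"
# Match either the start of the field or a word preceded by a space
rows = conn.execute(
    "SELECT O_ObjectID FROM A_Object "
    "WHERE O_Name LIKE ? OR O_Name LIKE ? ORDER BY O_ObjectID",
    (prefix + "%", "% " + prefix + "%")).fetchall()
print(rows)
```

'hyperactive' is correctly excluded: it shares a prefix with 'hyperlink' only up to "hyper", so neither LIKE pattern matches it.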
qid & accept id:
(6680228, 6680689)
query:
Managing Oracle Synonyms
soup:
At least up to 10g, PUBLIC is not a real user. You cannot create objects in the "Public schema":
\nSQL> CREATE TABLE public.foobar (id integer);\n\nCREATE TABLE public.foobar (id integer)\n\nORA-00903: invalid table name\n\nSQL> CREATE TABLE system.foobar (id integer);\n\nTable created\n\nSQL> \n
\nIf you run this query:
\nSELECT object_name \n FROM dba_objects \n WHERE owner='PUBLIC' \n AND object_type IN ('TABLE', 'VIEW');\n
\nYou can answer the question about pre-defined tables/views in the PUBLIC "schema".
\n
soup wrap:
At least up to 10g, PUBLIC is not a real user. You cannot create objects in the "Public schema":
SQL> CREATE TABLE public.foobar (id integer);
CREATE TABLE public.foobar (id integer)
ORA-00903: invalid table name
SQL> CREATE TABLE system.foobar (id integer);
Table created
SQL>
If you run this query:
SELECT object_name
FROM dba_objects
WHERE owner='PUBLIC'
AND object_type IN ('TABLE', 'VIEW');
you can see which pre-defined tables/views live in the PUBLIC "schema".
qid & accept id:
(6688196, 6689227)
query:
What is the most efficient way to concatenate a string from all parent rows using T-SQL?
soup:
To know for sure about performance you need to test. I have done some testing using your version (slightly modified) and a recursive CTE versions suggested by others.
\nI used your sample table with 2048 rows all in one single folder hierarchy so when passing 2048 as parameter to the function there are 2048 concatenations done.
\nThe loop version:
\ncreate function GetEntireLineage1 (@id int)\nreturns varchar(max)\nas\nbegin\n declare @ret varchar(max)\n\n select @ret = folder_name,\n @id = parent_id\n from Folder\n where id = @id\n\n while @@rowcount > 0\n begin\n select @ret = @ret + '-' + folder_name,\n @id = parent_id\n from Folder\n where id = @id\n end\n return @ret\nend\n
\nStatistics:
\n SQL Server Execution Times:\n CPU time = 125 ms, elapsed time = 122 ms.\n
\nThe recursive CTE version:
\ncreate function GetEntireLineage2(@id int)\nreturns varchar(max)\nbegin\n declare @ret varchar(max);\n\n with cte(id, name) as\n (\n select f.parent_id,\n cast(f.folder_name as varchar(max))\n from Folder as f\n where f.id = @id\n union all\n select f.parent_id,\n c.name + '-' + f.folder_name\n from Folder as f\n inner join cte as c\n on f.id = c.id\n )\n select @ret = name\n from cte\n where id is null\n option (maxrecursion 0)\n\n return @ret\nend\n
\nStatistics:
\n SQL Server Execution Times:\n CPU time = 187 ms, elapsed time = 183 ms.\n
\nSo between these two it is the loop version that is more efficient, at least on my test data. You need to test on your actual data to be sure.
\nEdit
\nRecursive CTE with for xml path('') trick.
\ncreate function [dbo].[GetEntireLineage4](@id int)\nreturns varchar(max)\nbegin\n declare @ret varchar(max) = '';\n\n with cte(id, lvl, name) as\n (\n select f.parent_id,\n 1,\n f.folder_name\n from Folder as f\n where f.id = @id\n union all\n select f.parent_id,\n lvl + 1,\n f.folder_name\n from Folder as f\n inner join cte as c\n on f.id = c.id\n )\n select @ret = (select '-'+name\n from cte\n order by lvl\n for xml path(''), type).value('.', 'varchar(max)')\n option (maxrecursion 0)\n\n return stuff(@ret, 1, 1, '')\nend\n
\nStatistics:
\n SQL Server Execution Times:\n CPU time = 31 ms, elapsed time = 37 ms.\n
\n
soup wrap:
To know for sure about performance you need to test. I have done some testing using your version (slightly modified) and the recursive CTE versions suggested by others.
I used your sample table with 2048 rows, all in one single folder hierarchy, so when passing 2048 as the parameter to the function, 2048 concatenations are done.
The loop version:
create function GetEntireLineage1 (@id int)
returns varchar(max)
as
begin
declare @ret varchar(max)
select @ret = folder_name,
@id = parent_id
from Folder
where id = @id
while @@rowcount > 0
begin
select @ret = @ret + '-' + folder_name,
@id = parent_id
from Folder
where id = @id
end
return @ret
end
Statistics:
SQL Server Execution Times:
CPU time = 125 ms, elapsed time = 122 ms.
The recursive CTE version:
create function GetEntireLineage2(@id int)
returns varchar(max)
as
begin
declare @ret varchar(max);
with cte(id, name) as
(
select f.parent_id,
cast(f.folder_name as varchar(max))
from Folder as f
where f.id = @id
union all
select f.parent_id,
c.name + '-' + f.folder_name
from Folder as f
inner join cte as c
on f.id = c.id
)
select @ret = name
from cte
where id is null
option (maxrecursion 0)
return @ret
end
Statistics:
SQL Server Execution Times:
CPU time = 187 ms, elapsed time = 183 ms.
So between these two it is the loop version that is more efficient, at least on my test data. You need to test on your actual data to be sure.
Edit
Recursive CTE with the for xml path('') trick:
create function [dbo].[GetEntireLineage4](@id int)
returns varchar(max)
as
begin
declare @ret varchar(max) = '';
with cte(id, lvl, name) as
(
select f.parent_id,
1,
f.folder_name
from Folder as f
where f.id = @id
union all
select f.parent_id,
lvl + 1,
f.folder_name
from Folder as f
inner join cte as c
on f.id = c.id
)
select @ret = (select '-'+name
from cte
order by lvl
for xml path(''), type).value('.', 'varchar(max)')
option (maxrecursion 0)
return stuff(@ret, 1, 1, '')
end
Statistics:
SQL Server Execution Times:
CPU time = 31 ms, elapsed time = 37 ms.
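The recursive-CTE concatenation idea ports directly to SQLite, which can be exercised from Python's built-in sqlite3 module (an illustrative sketch with a tiny made-up three-folder hierarchy, mirroring the answer's leaf-to-root concatenation order):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Folder (id INTEGER PRIMARY KEY, parent_id INTEGER, folder_name TEXT);
    INSERT INTO Folder VALUES (1, NULL, 'root'), (2, 1, 'docs'), (3, 2, 'work');
""")
# Walk from the given leaf up to the root, appending each parent's name
lineage = conn.execute("""
    WITH RECURSIVE cte(id, name) AS (
        SELECT parent_id, folder_name FROM Folder WHERE id = ?
        UNION ALL
        SELECT f.parent_id, c.name || '-' || f.folder_name
        FROM Folder f JOIN cte c ON f.id = c.id
    )
    SELECT name FROM cte WHERE id IS NULL
""", (3,)).fetchone()[0]
print(lineage)
```

The row whose id column has reached NULL carries the fully concatenated lineage, just as in the T-SQL GetEntireLineage2.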
qid & accept id:
(6691865, 6691997)
query:
How do I name a column as a date value
soup:
Try this technique:
\ndeclare @dt datetime\ndeclare @sql varchar(100)\nset @dt = getdate()\nset @sql = 'select 1 as [ ' + convert( varchar(25),@dt,120) + ']' \nexec (@sql)\n
\nIn your Case:
\ndeclare @dt datetime\ndeclare @sql varchar(100)\nset @dt = getdate()\nset @sql = 'select 0 as [ ' + convert( varchar(25),@dt,120) + ']' \nexec (@sql)\n
\n
soup wrap:
Try this technique:
declare @dt datetime
declare @sql varchar(100)
set @dt = getdate()
set @sql = 'select 1 as [ ' + convert( varchar(25),@dt,120) + ']'
exec (@sql)
In your case:
declare @dt datetime
declare @sql varchar(100)
set @dt = getdate()
set @sql = 'select 0 as [ ' + convert( varchar(25),@dt,120) + ']'
exec (@sql)
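The dynamic-SQL trick above (build the statement text, then execute it) can be sketched with Python's built-in sqlite3 module; only the column alias is interpolated, which is the same thing the T-SQL EXEC(@sql) does:

```python
import sqlite3
from datetime import date

conn = sqlite3.connect(":memory:")
# Build the column alias dynamically from a date value.
# Only the alias is spliced into the SQL text; never splice untrusted input.
col = date(2011, 8, 1).isoformat()
cur = conn.execute(f'SELECT 0 AS "{col}"')
print(cur.description[0][0])  # the result column is named after the date
```

cursor.description exposes the generated column name, confirming the alias took effect.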
qid & accept id:
(6743541, 6743635)
query:
Condition based on column in data
soup:
It is possible if you know the number of "custom" columns in advance.
\nyou can replace
\nand table1.(value of table2.field) = 'Red'\n
\nwith
\nand case table2.field\n when 'custom1' then table1.custom1\n when 'custom2' then table1.custom2\n when 'custom3' then table1.custom2\n ...\n else NULL\n end\n = 'Red'\n
\n
soup wrap:
It is possible if you know the set of "custom" columns in advance.
You can replace
and table1.(value of table2.field) = 'Red'
with
and case table2.field
when 'custom1' then table1.custom1
when 'custom2' then table1.custom2
when 'custom3' then table1.custom3
...
else NULL
end
= 'Red'
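The CASE-selects-a-column technique can be exercised end to end with Python's built-in sqlite3 module (an illustrative sketch with made-up table1/table2 data; table2.field names which custom column of table1 to compare):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (id INTEGER, custom1 TEXT, custom2 TEXT);
    CREATE TABLE table2 (id INTEGER, field TEXT);
    INSERT INTO table1 VALUES (1, 'Red', 'Blue');
    INSERT INTO table2 VALUES (1, 'custom1');
""")
# CASE picks which column's value to compare, based on table2.field
rows = conn.execute("""
    SELECT table1.id
    FROM table1 JOIN table2 ON table1.id = table2.id
    WHERE CASE table2.field
            WHEN 'custom1' THEN table1.custom1
            WHEN 'custom2' THEN table1.custom2
            ELSE NULL
          END = 'Red'
""").fetchall()
print(rows)
```

The row matches because table2.field says 'custom1' and table1.custom1 is 'Red'; had field been 'custom2', the comparison would have used 'Blue' and failed.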
qid & accept id:
(6745525, 6789844)
query:
Oracle logging changes to XML
soup:
I have found little work-arround:
\nFirst get a little information about table:
\nselect 'xmlelement("'|| column_name||'",new.' || column_name || '),' from all_tab_columns where lower(table_name) = 'my_table';\n
\nand just copy paste result into
\nselect xmlelement("doc",\n\n--paste here\n\n) from dual;\n
\nUgly, but working.
\n
soup wrap:
I have found a little workaround:
First, get a little information about the table:
select 'xmlelement("'|| column_name||'",new.' || column_name || '),' from all_tab_columns where lower(table_name) = 'my_table';
and just copy-paste the result into
select xmlelement("doc",
--paste here
) from dual;
Ugly, but working.
qid & accept id:
(6810923, 6812237)
query:
Oracle SQL - How do i output data from a table based on the day of the week from a hiredate column?
soup:
Hoons's answer is correct, but will only work if your Oracle session is using English language (NLS_LANGUAGE).
\nAnother query that work for all languages is
\nselect name, position, hiredate\n from table\nwhere to_char(sysdate, 'D') in (1, 2); -- 1 monday; 2 tuesday\n
\nto_char(sysdate, 'D') returns the following values for each day of week:
\n1 monday\n2 tuesday\n3 wednesday\n4 thrusday\n5 friday\n6 saturday\n7 sunday\n
\n
soup wrap:
Hoons's answer is correct, but will only work if your Oracle session is using the English language (NLS_LANGUAGE).
Another query that works regardless of language is
select name, position, hiredate
from table
where to_char(hiredate, 'D') in (1, 2); -- 1 monday; 2 tuesday
to_char(<date>, 'D') returns the following values for each day of the week. Note, however, that the digit-to-day mapping depends on the NLS_TERRITORY setting (which defines the first day of the week), so verify it in your environment:
1 monday
2 tuesday
3 wednesday
4 thursday
5 friday
6 saturday
7 sunday
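A portable sketch of "filter by the day of the week of a date column" can be written against SQLite via Python's built-in sqlite3 module (illustrative only: the emp table and rows are made up, and SQLite's strftime('%w') numbering is fixed at 0=Sunday..6=Saturday, unlike Oracle's NLS-dependent 'D'):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE emp (name TEXT, hiredate TEXT);
    INSERT INTO emp VALUES ('a', '2011-08-01'),  -- a Monday
                           ('b', '2011-08-03');  -- a Wednesday
""")
# strftime('%w') yields 0=Sunday .. 6=Saturday, so Monday/Tuesday are '1' and '2'
rows = conn.execute(
    "SELECT name FROM emp WHERE strftime('%w', hiredate) IN ('1', '2')").fetchall()
print(rows)
```

Only the Monday hire survives the filter; the Wednesday hire ('%w' = '3') is excluded.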
qid & accept id:
(6811449, 6811612)
query:
Using IFNULL to set NULLs to zero
soup:
EDIT: NEW INFO BASED ON FULL QUERY
\nThe reason the counts can be null in the query you specify is because a left join will return nulls on unmatched records. So the subquery itself is not returning null counts (hence all the responses and confusion). You need to specify the IFNULL in the outer-most select, as follows:
\nSELECT qa.*, user_profiles.*, c.*, n.pid, ifnull(n.ans_count, 0) as ans_count\nFROM qa\n JOIN user_profiles\n ON user_id = author_id\n LEFT JOIN (SELECT cm_id,\n cm_author_id,\n id_fk,\n cm_text,\n cm_timestamp,\n first_name AS cm_first_name,\n last_name AS cm_last_name,\n facebook_id AS cm_fb_id,\n picture AS cm_picture\n FROM cm\n JOIN user_profiles\n ON user_id = cm_author_id) AS c\n ON id = c.id_fk\n LEFT JOIN (SELECT parent_id AS pid, COUNT(*) AS ans_count\n FROM qa\n GROUP BY parent_id) AS n\n ON id = n.pid\nWHERE id LIKE '%'\nORDER BY id DESC\n
\nOLD RESPONSE
\nCan you explain in more detail what you are seeing and what you expect to see? Count can't return NULLs.
\nRun this set of queries and you'll see that the counts are always 2. You can change the way the NULL parent_ids are displayed (as NULL or 0), but the count itself will always return.
\ncreate temporary table if not exists SO_Test(\n parent_id int null);\n\ninsert into SO_Test(parent_id)\nselect 2 union all select 4 union all select 6 union all select null union all select null union all select 45 union all select 2;\n\n\nSELECT IFNULL(parent_id, 0) AS pid, COUNT(*) AS ans_count\n FROM SO_Test\n GROUP BY IFNULL(parent_id, 0);\n\nSELECT parent_id AS pid, COUNT(*) AS ans_count\n FROM SO_Test\n GROUP BY parent_id;\n\ndrop table SO_Test;\n
\n
soup wrap:
EDIT: NEW INFO BASED ON FULL QUERY
The reason the counts can be NULL in the query you specify is that a LEFT JOIN returns NULLs for unmatched records. So the subquery itself is not returning NULL counts (hence all the responses and confusion). You need to specify the IFNULL in the outermost SELECT, as follows:
SELECT qa.*, user_profiles.*, c.*, n.pid, ifnull(n.ans_count, 0) as ans_count
FROM qa
JOIN user_profiles
ON user_id = author_id
LEFT JOIN (SELECT cm_id,
cm_author_id,
id_fk,
cm_text,
cm_timestamp,
first_name AS cm_first_name,
last_name AS cm_last_name,
facebook_id AS cm_fb_id,
picture AS cm_picture
FROM cm
JOIN user_profiles
ON user_id = cm_author_id) AS c
ON id = c.id_fk
LEFT JOIN (SELECT parent_id AS pid, COUNT(*) AS ans_count
FROM qa
GROUP BY parent_id) AS n
ON id = n.pid
WHERE id LIKE '%'
ORDER BY id DESC
OLD RESPONSE
Can you explain in more detail what you are seeing and what you expect to see? Count can't return NULLs.
Run this set of queries and you'll see that the counts are always 2. You can change the way the NULL parent_ids are displayed (as NULL or 0), but the count itself will always return.
create temporary table if not exists SO_Test(
parent_id int null);
insert into SO_Test(parent_id)
select 2 union all select 4 union all select 6 union all select null union all select null union all select 45 union all select 2;
SELECT IFNULL(parent_id, 0) AS pid, COUNT(*) AS ans_count
FROM SO_Test
GROUP BY IFNULL(parent_id, 0);
SELECT parent_id AS pid, COUNT(*) AS ans_count
FROM SO_Test
GROUP BY parent_id;
drop table SO_Test;
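The key point (the LEFT JOIN, not COUNT, produces the NULLs, so IFNULL belongs in the outer SELECT) can be sketched with Python's built-in sqlite3 module on a pared-down qa table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE qa (id INTEGER PRIMARY KEY, parent_id INTEGER);
    INSERT INTO qa VALUES (1, NULL), (2, 1), (3, 1), (4, NULL);
""")
# IFNULL wraps the outer query's column: rows with no answers come back
# from the LEFT JOIN with a NULL ans_count, which we map to 0 here
rows = conn.execute("""
    SELECT q.id, IFNULL(n.ans_count, 0)
    FROM qa q
    LEFT JOIN (SELECT parent_id AS pid, COUNT(*) AS ans_count
               FROM qa GROUP BY parent_id) AS n
      ON q.id = n.pid
    ORDER BY q.id
""").fetchall()
print(rows)
```

Question 1 has two answers; questions 2, 3 and 4 have none, and without the outer IFNULL their count column would be NULL rather than 0.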
qid & accept id:
(6814426, 6814665)
query:
How to delete smaller records for each group?
soup:
you can write following query, if you are working in oracle -
\ndelete from item_table where rowid not in\n(\n select rowid from item_table \n where (item,price1) in (select item,max(price1) from item_table group by item)\n or (item,price2) in (select item,max(price2) from item_table group by item)\n)\n
\ni heard that rowid is not there in sql server or mysql ...\nplease tell us about your database name which one you are using.
\nyou can write as follow also..
\ndelete from item_table where (item,date,shift,price1,price2 ) not in\n (\n select item,date,shift,price1,price2 from item_table \n where (item,price1) in (select item,max(price1) from item_table group by item)\n or (item,price2) in (select item,max(price2) from item_table group by item)\n )\n
\n
soup wrap:
You can write the following query if you are working in Oracle:
delete from item_table where rowid not in
(
select rowid from item_table
where (item,price1) in (select item,max(price1) from item_table group by item)
or (item,price2) in (select item,max(price2) from item_table group by item)
)
Note that ROWID does not exist in SQL Server or MySQL, so please tell us which database you are using.
You can also write it as follows:
delete from item_table where (item,date,shift,price1,price2 ) not in
(
select item,date,shift,price1,price2 from item_table
where (item,price1) in (select item,max(price1) from item_table group by item)
or (item,price2) in (select item,max(price2) from item_table group by item)
)
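SQLite also exposes a rowid, so the delete-by-rowid idea can be sketched with Python's built-in sqlite3 module (simplified to a single price column, with made-up sample rows; the correlated subquery stands in for the answer's tuple-IN comparison, which not every engine supports):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE item_table (item TEXT, price1 INTEGER);
    INSERT INTO item_table VALUES ('a', 10), ('a', 30), ('b', 5), ('b', 7);
""")
# Keep only each item's max-price row, addressing rows by rowid
conn.execute("""
    DELETE FROM item_table WHERE rowid NOT IN (
        SELECT rowid FROM item_table t
        WHERE t.price1 = (SELECT MAX(price1) FROM item_table
                          WHERE item = t.item)
    )
""")
print(conn.execute("SELECT item, price1 FROM item_table ORDER BY item").fetchall())
```

Each group's smaller-priced rows are removed, leaving one max-price row per item.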
qid & accept id:
(6814563, 6816558)
query:
get attribute list from mongodb object
soup:
the code:
\n> db.mycoll.insert( {num:3, text:"smth", date: new Date(), childs:[1,2,3]})\n> var rec = db.mycoll.findOne();\n\n> for (key in rec) { \n var val = rec[key];\n print( key + "(" + typeof(val) + "): " + val ) }\n
\nwill print:
\n_id(object): 4e2d688cb2f2b62248c1c6bb\nnum(number): 3\ntext(string): smth\ndate(object): Mon Jul 25 2011 15:58:52 GMT+0300\nchilds(object): 1,2,3\n
\n(javascript array and date are just "object")
\nThis shows "schema" of only top level, if you want to look deeper, some recursive tree-walking function is needed.
\n
soup wrap:
the code:
> db.mycoll.insert( {num:3, text:"smth", date: new Date(), childs:[1,2,3]})
> var rec = db.mycoll.findOne();
> for (key in rec) {
var val = rec[key];
print( key + "(" + typeof(val) + "): " + val ) }
will print:
_id(object): 4e2d688cb2f2b62248c1c6bb
num(number): 3
text(string): smth
date(object): Mon Jul 25 2011 15:58:52 GMT+0300
childs(object): 1,2,3
(a JavaScript array and a date are both just "object")
This shows the "schema" of only the top level; if you want to look deeper, a recursive tree-walking function is needed.
qid & accept id:
(6836478, 6836613)
query:
Codeigniter run query before a update
soup:
you can write your own function in the file core/MY_Model.php to do that:
\nfunction queryThenUpdate($query,$update)\n{\n $query = $this->db->query($query);\n //use as you need $query\n $this->db->update($update['table'],$update['data']);\n}\n
\nwhere:
\n\n$query is your actual query: SELECT * FROM ... \n$update is an array of two elements $update['table'] is the table to update and $update['data'] is the updating data as specified on codeigniter active record's documentation \n
\nthen make every model extend MY_Model
\nclass Your_Model extend MY_Model\n
\nand every time you need to update something:
\n$this->Your_Model->queryThenUpdate($query,$update)\n
\n
soup wrap:
You can write your own function in the file core/MY_Model.php to do that:
function queryThenUpdate($query,$update)
{
$query = $this->db->query($query);
//use as you need $query
$this->db->update($update['table'],$update['data']);
}
where:
$query is your actual query: SELECT * FROM ...
$update is an array of two elements $update['table'] is the table to update and $update['data'] is the updating data as specified on codeigniter active record's documentation
Then make every model extend MY_Model:
class Your_Model extends MY_Model
and every time you need to update something:
$this->Your_Model->queryThenUpdate($query,$update)
qid & accept id:
(6934563, 6934919)
query:
Lock a database or table in sqlite (Android)
soup:
Let's say SYNCHRONICED is 0 when the record is inserted or updated, 1 when the record is sent to the server, and 2 when the server has acknowledged the sync.
\nThe T1 thread should do:
\nBEGIN;\nSELECT ID, VALUE FROM TAB WHERE SYNCHRONICED = 0;\nUPDATE TAB SET SYNCHRONICED = 1 WHERE SYNCHRONICED = 0;\nCOMMIT;\n
\nThe select statement gives the records to send to the server.
\nNow any insert or update to TAB should set SYNCHRONICED = 0;
\nWhen the server responds with ack,
\nUPDATE TAB SET SYNCHRONICED = 2 WHERE SYNCHRONICED = 1;\n
\nThis will not affect any records updated or inserted since their SYNCHRONICED is 0.
\n
soup wrap:
Let's say SYNCHRONICED is 0 when the record is inserted or updated, 1 when the record is sent to the server, and 2 when the server has acknowledged the sync.
The T1 thread should do:
BEGIN;
SELECT ID, VALUE FROM TAB WHERE SYNCHRONICED = 0;
UPDATE TAB SET SYNCHRONICED = 1 WHERE SYNCHRONICED = 0;
COMMIT;
The select statement gives the records to send to the server.
Now any insert or update to TAB should set SYNCHRONICED = 0;
When the server responds with an ack:
UPDATE TAB SET SYNCHRONICED = 2 WHERE SYNCHRONICED = 1;
This will not affect any records updated or inserted since their SYNCHRONICED is 0.
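The three-state handoff (0 = pending, 1 = in flight, 2 = acknowledged) can be walked through end to end with Python's built-in sqlite3 module, keeping the answer's SYNCHRONICED column name:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TAB (id INTEGER PRIMARY KEY, value TEXT, synchroniced INTEGER)")
conn.execute("INSERT INTO TAB VALUES (1, 'a', 0), (2, 'b', 0)")

# T1: read the pending rows and mark them in-flight inside one transaction
with conn:
    pending = conn.execute("SELECT id, value FROM TAB WHERE synchroniced = 0").fetchall()
    conn.execute("UPDATE TAB SET synchroniced = 1 WHERE synchroniced = 0")

# A new local edit arrives while the server round-trip is in progress
conn.execute("INSERT INTO TAB VALUES (3, 'c', 0)")

# Server ack: only the in-flight rows move to state 2
conn.execute("UPDATE TAB SET synchroniced = 2 WHERE synchroniced = 1")
print(conn.execute("SELECT id, synchroniced FROM TAB ORDER BY id").fetchall())
```

The row inserted mid-sync keeps state 0, so it is untouched by the ack and will be picked up on the next pass, which is exactly the property the answer relies on.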
qid & accept id:
(6937080, 6937175)
query:
how to add primary key to table having duplicate values?
soup:
Add PK as AUTO_INCREMENT, it will change all 0 values automatically -
\nALTER TABLE table_a\n CHANGE COLUMN id id INT(11) NOT NULL AUTO_INCREMENT,\n ADD PRIMARY KEY (id);\n
\nAfter, AUTO_INCREMENT property can be removed -
\nALTER TABLE table_a\n CHANGE COLUMN id id INT(11) NOT NULL;\n
\n
soup wrap:
Add the PK as AUTO_INCREMENT; it will change all 0 values automatically:
ALTER TABLE table_a
CHANGE COLUMN id id INT(11) NOT NULL AUTO_INCREMENT,
ADD PRIMARY KEY (id);
Afterwards, the AUTO_INCREMENT property can be removed:
ALTER TABLE table_a
CHANGE COLUMN id id INT(11) NOT NULL;
qid & accept id:
(6994843, 6994915)
query:
MySQL query where JOIN depends on CASE
soup:
It probably needs tweaking to return the correct results but I hope you get the idea:
\nSELECT ft1.task, COUNT(ft1.id) AS count\nFROM feed_tasks ft1\nLEFT JOIN pages p1 ON ft1.type=1 AND p1.id = ft1.reference_id\nLEFT JOIN urls u1 ON ft1.type=2 AND u1.id = ft1.reference_id\nWHERE COALESCE(p1.id, u1.id) IS NOT NULL\nAND ft1.account_id IS NOT NULL\nAND a1.user_id = :user_id\n
\nEdit:
\nA little note about CASE...END. Your original code does not run because, unlike PHP or JavaScript, the SQL CASE is not a flow control structure that allows to choose which part of the code will run. Instead, it returns an expression. So you can do this:
\nSELECT CASE\n WHEN foo<0 THEN 'Yes'\n ELSE 'No'\nEND AS is_negative\nFROM bar\n
\n... but not this:
\n-- Invalid\nCASE \n WHEN foo<0 THEN SELECT 'Yes' AS is_negative\n ELSE SELECT 'No' AS is_negative\nEND\nFROM bar\n
\n
soup wrap:
It probably needs tweaking to return the correct results but I hope you get the idea:
SELECT ft1.task, COUNT(ft1.id) AS count
FROM feed_tasks ft1
LEFT JOIN pages p1 ON ft1.type=1 AND p1.id = ft1.reference_id
LEFT JOIN urls u1 ON ft1.type=2 AND u1.id = ft1.reference_id
WHERE COALESCE(p1.id, u1.id) IS NOT NULL
AND ft1.account_id IS NOT NULL
AND a1.user_id = :user_id
Edit:
A little note about CASE...END. Your original code does not run because, unlike in PHP or JavaScript, SQL's CASE is not a flow-control structure that chooses which part of the code runs. Instead, it is an expression that returns a value. So you can do this:
SELECT CASE
WHEN foo<0 THEN 'Yes'
ELSE 'No'
END AS is_negative
FROM bar
... but not this:
-- Invalid
CASE
WHEN foo<0 THEN SELECT 'Yes' AS is_negative
ELSE SELECT 'No' AS is_negative
END
FROM bar
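The CASE-is-an-expression point can be demonstrated with Python's built-in sqlite3 module (an illustrative sketch reusing the answer's foo/bar names with made-up sample values):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bar (foo INTEGER)")
conn.executemany("INSERT INTO bar VALUES (?)", [(-3,), (7,)])
# CASE yields a value per row; it does not select which statement runs
rows = conn.execute("""
    SELECT CASE WHEN foo < 0 THEN 'Yes' ELSE 'No' END AS is_negative
    FROM bar ORDER BY foo
""").fetchall()
print(rows)
```

Each row gets its own 'Yes'/'No' value, which is exactly the behavior a flow-control reading of CASE would not give you.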
qid & accept id:
(7008452, 7008500)
query:
How do I select any value from SP?
soup:
You execute the stored procedure.
\nexec MySP\n
\nResult:
\n(No column name)\n2011-08-10 00:00:00.000\n
\nEdit
\nStored procedure with output parameter @startdate
\nalter PROCEDURE MySP\n(\n@startdate datetime = null out,\n@enddate datetime = null\n)\nAS\nBEGIN\n declare @date datetime \n Set @date= convert(datetime,convert(varchar(10),getdate(),101))\n SET @startdate = ISNULL(@startdate,convert (datetime,convert(varchar(10),getdate(),101)))\nEND\n
\nUse like this
\ndeclare @D datetime\nexec MySP @D out\nselect @D\n
\n
soup wrap:
You execute the stored procedure.
exec MySP
Result:
(No column name)
2011-08-10 00:00:00.000
Edit
Stored procedure with output parameter @startdate
alter PROCEDURE MySP
(
@startdate datetime = null out,
@enddate datetime = null
)
AS
BEGIN
declare @date datetime
Set @date= convert(datetime,convert(varchar(10),getdate(),101))
SET @startdate = ISNULL(@startdate,convert (datetime,convert(varchar(10),getdate(),101)))
END
Use like this
declare @D datetime
exec MySP @D out
select @D
qid & accept id:
(7112526, 7112793)
query:
Checking the value of a field and updating it
soup:
One way to find such rows (or tuples) would be a query like:
\nSELECT job_num, item_code, invoice_num\nFROM tablename\nWHERE job_num = 94834 AND item_code = "EFC-ASSOC-01" AND invoice_num = ""\n
\nor follow @Ben's advice if the empty string is a problem. Then you can do an update:
\nUPDATE tablename SET invoice_num = ? WHERE job_num = .........\n
\nHowever, the problem with this approach is that if you're not using the primary key to choose a row in the update statement, multiple rows could get updated (similarly, the select statement could return multiple rows). So, you'll have to look at the database schema and determine the primary key column(s) of the table, and make sure that all of the primary key columns are used in the WHERE clause of the update. If you just do
\nUPDATE tablename SET invoice_num = value WHERE invoice_num = ""\n
\nall rows with that value of invoice_num will be updated, which may not be what you want.
\n
soup wrap:
One way to find such rows (or tuples) would be a query like:
SELECT job_num, item_code, invoice_num
FROM tablename
WHERE job_num = 94834 AND item_code = "EFC-ASSOC-01" AND invoice_num = ""
or follow @Ben's advice if the empty string is a problem. Then you can do an update:
UPDATE tablename SET invoice_num = ? WHERE job_num = .........
However, the problem with this approach is that if you're not using the primary key to choose a row in the update statement, multiple rows could get updated (similarly, the select statement could return multiple rows). So, you'll have to look at the database schema and determine the primary key column(s) of the table, and make sure that all of the primary key columns are used in the WHERE clause of the update. If you just do
UPDATE tablename SET invoice_num = value WHERE invoice_num = ""
all rows with that value of invoice_num will be updated, which may not be what you want.
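The multiple-rows risk is easy to see with Python's built-in sqlite3 module (an illustrative sketch: the jobs table, its columns, and the rows are made up to mirror the question). Pinning the WHERE clause to the full candidate key limits the update to one row, and the cursor's rowcount confirms how many rows were actually touched:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE jobs (job_num INTEGER, item_code TEXT, invoice_num TEXT);
    INSERT INTO jobs VALUES (94834, 'EFC-ASSOC-01', ''),
                            (94835, 'EFC-ASSOC-02', '');
""")
# Pin the update to the full candidate key so only the intended row changes
cur = conn.execute(
    "UPDATE jobs SET invoice_num = ? WHERE job_num = ? AND item_code = ?",
    ("INV-1", 94834, "EFC-ASSOC-01"))
print(cur.rowcount)  # number of rows actually updated
```

Filtering only on invoice_num = '' instead would have reported 2 here, silently stamping the wrong job with the invoice number.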
qid & accept id:
(7116576, 7123003)
query:
SQL find consecutive quarters
soup:
First of all, your data model is making it hard for you. You need an easy way to spot consecutive quarters, So, you need a table to hold that information, with a key which is a rising increment: how else do you expect the computer to know that Spring 2009 follows Winter 2008?
\nAnyway, here's my version of your test data. I'm using names to make it easier to see what's going on:
\nSQL> select s.name as student\n 2 , c.name as class\n 3 , q.season||' '||q.year as quarter\n 4 , q.q_id\n 5 , c.base_cost\n 6 from enrolments e\n 7 join students s\n 8 on (s.s_id = e.s_id)\n 9 join classes c\n 10 on (c.c_id = e.c_id)\n 11 join quarters q\n 12 on (q.q_id = c.q_id)\n 13 order by s.s_id, q.q_id\n 14 /\n\nSTUDENT CLASS QUARTER Q_ID BASE_COST\n---------- -------------------- --------------- ---------- ----------\nSheldon Introduction to SQL Spring 2008 100 100\nSheldon Advanced SQL Spring 2009 104 150\nHoward Introduction to SQL Spring 2008 100 100\nHoward Information Theory Summer 2008 101 75\nRajesh Information Theory Summer 2008 101 75\nLeonard Crypto Foundation Autumn 2008 102 120\nLeonard PHP for Dummies Winter 2008 103 90\nLeonard Advanced SQL Spring 2009 104 150\n\n8 rows selected.\n\nSQL>\n
\nAs you can see, I have got a table QUARTERS whose primary key Q_ID increments by one in calendrical order.
\nI'm going to use Oracle syntax to solve this, specifically the LAG analytic function:
\nSQL> select s.name as student\n 2 , c.name as class\n 3 , q.season||' '||q.year as quarter\n 4 , q.q_id\n 5 , c.base_cost\n 6 , lag (q.q_id) over (partition by s.s_id order by q.q_id) prev_q_id\n 7 from enrolments e\n 8 join students s\n 9 on (s.s_id = e.s_id)\n 10 join classes c\n 11 on (c.c_id = e.c_id)\n 12 join quarters q\n 13 on (q.q_id = c.q_id)\n 14 order by s.s_id, q.q_id\n 15 /\n\nSTUDENT CLASS QUARTER Q_ID BASE_COST PREV_Q_ID\n---------- -------------------- --------------- ---------- ---------- ----------\nSheldon Introduction to SQL Spring 2008 100 100\nSheldon Advanced SQL Spring 2009 104 150 100\nHoward Introduction to SQL Spring 2008 100 100\nHoward Information Theory Summer 2008 101 75 100\nRajesh Information Theory Summer 2008 101 75\nLeonard Crypto Foundation Autumn 2008 102 120\nLeonard PHP for Dummies Winter 2008 103 90 102\nLeonard Advanced SQL Spring 2009 104 150 103\n\n8 rows selected.\n\nSQL>\n
\nSo, by looking in the PREV_Q_ID columns we can see that Howard, Sheldon and Leonard have each taken more than one course. Only Leonard has taken three courses. By comparing the values in the PREV_Q_ID and Q_ID columns we can see that Howard's two courses are in consective quarters, whereas Sheldon's are not.
\nNow we can do some maths:
\nSQL> select student\n 2 , class\n 3 , quarter\n 4 , base_cost\n 5 , discount*100 as discount_pct\n 6 , base_cost - (base_cost*discount) as actual_cost\n 7 from\n 8 ( select student\n 9 , class\n 10 , quarter\n 11 , base_cost\n 12 , case\n 13 when prev_q_id is not null\n 14 and q_id - prev_q_id = 1\n 15 then 0.2\n 16 else 0\n 17 end as discount\n 18 , s_id\n 19 , q_id\n 20 from\n 21 (\n 22 select s.name as student\n 23 , c.name as class\n 24 , q.season||' '||q.year as quarter\n 25 , q.q_id\n 26 , c.base_cost\n 27 , lag (q.q_id) over (partition by s.s_id order by q.q_id) prev_q_id\n 28 , s.s_id\n 29 from enrolments e\n 30 join students s\n 31 on (s.s_id = e.s_id)\n 32 join classes c\n 33 on (c.c_id = e.c_id)\n 34 join quarters q\n 35 on (q.q_id = c.q_id)\n 36 )\n 37 )\n 38 order by s_id, q_id\n 39 /\n
\n(artifical break to obviate the need to scroll down to see the results)
\nSTUDENT CLASS QUARTER BASE_COST DISCOUNT_PCT ACTUAL_COST\n---------- -------------------- ----------- ---------- ------------ -----------\nSheldon Introduction to SQL Spring 2008 100 0 100\nSheldon Advanced SQL Spring 2009 150 0 150\nHoward Introduction to SQL Spring 2008 100 0 100\nHoward Information Theory Summer 2008 75 20 60\nRajesh Information Theory Summer 2008 75 0 75\nLeonard Crypto Foundation Autumn 2008 120 0 120\nLeonard PHP for Dummies Winter 2008 90 20 72\nLeonard Advanced SQL Spring 2009 150 20 120\n\n8 rows selected.\n\nSQL>\n
\nSo, Howard and Leonard get discounts for their consecutive classes, and Sheldon and Raj don't.
\n
soup wrap:
First of all, your data model is making this hard for you. You need an easy way to spot consecutive quarters, so you need a table to hold that information, with a key that is a rising increment: how else do you expect the computer to know that Spring 2009 follows Winter 2008?
Anyway, here's my version of your test data. I'm using names to make it easier to see what's going on:
SQL> select s.name as student
2 , c.name as class
3 , q.season||' '||q.year as quarter
4 , q.q_id
5 , c.base_cost
6 from enrolments e
7 join students s
8 on (s.s_id = e.s_id)
9 join classes c
10 on (c.c_id = e.c_id)
11 join quarters q
12 on (q.q_id = c.q_id)
13 order by s.s_id, q.q_id
14 /
STUDENT CLASS QUARTER Q_ID BASE_COST
---------- -------------------- --------------- ---------- ----------
Sheldon Introduction to SQL Spring 2008 100 100
Sheldon Advanced SQL Spring 2009 104 150
Howard Introduction to SQL Spring 2008 100 100
Howard Information Theory Summer 2008 101 75
Rajesh Information Theory Summer 2008 101 75
Leonard Crypto Foundation Autumn 2008 102 120
Leonard PHP for Dummies Winter 2008 103 90
Leonard Advanced SQL Spring 2009 104 150
8 rows selected.
SQL>
As you can see, I have got a table QUARTERS whose primary key Q_ID increments by one in calendrical order.
I'm going to use Oracle syntax to solve this, specifically the LAG analytic function:
SQL> select s.name as student
2 , c.name as class
3 , q.season||' '||q.year as quarter
4 , q.q_id
5 , c.base_cost
6 , lag (q.q_id) over (partition by s.s_id order by q.q_id) prev_q_id
7 from enrolments e
8 join students s
9 on (s.s_id = e.s_id)
10 join classes c
11 on (c.c_id = e.c_id)
12 join quarters q
13 on (q.q_id = c.q_id)
14 order by s.s_id, q.q_id
15 /
STUDENT CLASS QUARTER Q_ID BASE_COST PREV_Q_ID
---------- -------------------- --------------- ---------- ---------- ----------
Sheldon Introduction to SQL Spring 2008 100 100
Sheldon Advanced SQL Spring 2009 104 150 100
Howard Introduction to SQL Spring 2008 100 100
Howard Information Theory Summer 2008 101 75 100
Rajesh Information Theory Summer 2008 101 75
Leonard Crypto Foundation Autumn 2008 102 120
Leonard PHP for Dummies Winter 2008 103 90 102
Leonard Advanced SQL Spring 2009 104 150 103
8 rows selected.
SQL>
So, by looking at the PREV_Q_ID column we can see that Howard, Sheldon and Leonard have each taken more than one course. Only Leonard has taken three courses. By comparing the values in the PREV_Q_ID and Q_ID columns we can see that Howard's two courses are in consecutive quarters, whereas Sheldon's are not.
Now we can do some maths:
SQL> select student
2 , class
3 , quarter
4 , base_cost
5 , discount*100 as discount_pct
6 , base_cost - (base_cost*discount) as actual_cost
7 from
8 ( select student
9 , class
10 , quarter
11 , base_cost
12 , case
13 when prev_q_id is not null
14 and q_id - prev_q_id = 1
15 then 0.2
16 else 0
17 end as discount
18 , s_id
19 , q_id
20 from
21 (
22 select s.name as student
23 , c.name as class
24 , q.season||' '||q.year as quarter
25 , q.q_id
26 , c.base_cost
27 , lag (q.q_id) over (partition by s.s_id order by q.q_id) prev_q_id
28 , s.s_id
29 from enrolments e
30 join students s
31 on (s.s_id = e.s_id)
32 join classes c
33 on (c.c_id = e.c_id)
34 join quarters q
35 on (q.q_id = c.q_id)
36 )
37 )
38 order by s_id, q_id
39 /
(artificial break to obviate the need to scroll down to see the results)
STUDENT CLASS QUARTER BASE_COST DISCOUNT_PCT ACTUAL_COST
---------- -------------------- ----------- ---------- ------------ -----------
Sheldon Introduction to SQL Spring 2008 100 0 100
Sheldon Advanced SQL Spring 2009 150 0 150
Howard Introduction to SQL Spring 2008 100 0 100
Howard Information Theory Summer 2008 75 20 60
Rajesh Information Theory Summer 2008 75 0 75
Leonard Crypto Foundation Autumn 2008 120 0 120
Leonard PHP for Dummies Winter 2008 90 20 72
Leonard Advanced SQL Spring 2009 150 20 120
8 rows selected.
SQL>
So, Howard and Leonard get discounts for their consecutive classes, and Sheldon and Raj don't.
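For reference, the same LAG-then-CASE pattern runs on any engine with window functions. Here is a minimal runnable sketch using Python's sqlite3 (SQLite 3.25+); the single-table schema and its data are invented stand-ins for the joined tables above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE enrolments (student TEXT, q_id INT, base_cost REAL);
INSERT INTO enrolments VALUES
  ('Sheldon', 100, 100), ('Sheldon', 104, 150),
  ('Howard',  100, 100), ('Howard',  101,  75);
""")

rows = con.execute("""
SELECT student, q_id, base_cost,
       base_cost - base_cost * discount AS actual_cost
FROM (
  SELECT student, q_id, base_cost,
         -- NULL minus anything is NULL, so each student's first quarter
         -- falls through to the ELSE 0 branch automatically
         CASE WHEN q_id - LAG(q_id) OVER
                   (PARTITION BY student ORDER BY q_id) = 1
              THEN 0.2 ELSE 0 END AS discount
  FROM enrolments
)
ORDER BY student, q_id
""").fetchall()
```

Howard's second quarter (101 follows 100) earns the 20% discount; Sheldon's gap from 100 to 104 does not.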
qid & accept id:
(7246987, 7247630)
query:
How to properly index tables used in a query with multiple joins
soup:
Note: SQL Server is what I use. If you're using something else - this may not apply.\nAlso note: I'm going to discuss indexes to aid in accessing data from a table. Covering indexes are a separate topic that I am not addressing here.
\nWhen accessing a table, there's 3 ways to do it.
\n\n- Use Filtering Criteria.
\n- Use Relational Criteria from rows already read.
\n- Read the Whole Table!
\n
\nI started by making a list of all tables, with filtering criteria and relational criteria.
\narticles\n\n articles.expirydate > 'somedate'\n articles.dateadded > 'somedate'\n articles.status >= someint\n\n articles.article_id <-> articles_to_geo.article_id\n articles.article_id <-> articles_to_badges.article_id\n articles.site_id <-> sites.id\n\narticles_to_geo\n\n articles_to_geo.article_id <-> articles.article_id\n articles_to_geo.whitelist_city_id <-> cities_whitelist.city_id\n\ncities_whitelist\n\n cities_whitelist.published = someint\n\n cities_whitelist.city_id <-> articles_to_geo.whitelist_city_id\n cities_whiltelist.city_id <-> cities.city_id\n\ncities\n\n cities.city_id <-> cities_whiltelist.city_id\n\narticles_to_badges\n\n articles_to_badges.badge_id in (some ids)\n\n articles_to_badges.article_id <-> articles.article_id\n article_to_badges.badge_id <-> badges.id\n\nbadges\n\n badges.id <-> article_to_badges.badge_id\n\nsites\n\n sites.id <-> articles.site_id\n
\nThe clumsiest way to approach this is to simply make an index on each table that supports each relational and filtering critera... then let the optimizer choose which indexes it wants to use. This approach is great for IO performance, and simple to do... but it costs a lot of space in un-used indexes.
\nThe next best way is to run the query with these options turned on:
\nSET STATISTICS IO ON\nSET STATISTICS TIME ON\n
\nIf a particular set of tables is using more IO, indexing efforts can be focused on them. To do this relies on the optimizer plan for the order in which the tables are access to already be pretty good.
\n
\nIf the optimizer can't make a good plan at all because of the lack of indexes, what I do is figure out which order I'd like the tables to be accessed, then add indexes that support those accesses.
\nNote: the first table accessed does not have the option of using relational criteria, as no records are yet read. First table must be accessed by Filtering Criteria or Read the Whole Table.
\nOne possible order is the order in the query. This approach might be pretty bad because our Articles Filtering Criteria is based on 3 different ranges. There could be thousands of articles that meet that criteria and it's hard to formulate an index to support those ranges.
\nArticles (Filter)\n Articles_to_Geo (Relational by Article_Id)\n Cities_WhiteList (Relational by City_Id) (Filter)\n Cities (Relational by City_Id) (Filter)\n Articles_to_Badges (Relational by Article_Id) (Filter)\n Badges (Relational by Badge_Id)\n Sites (Relational by Article_Id)\n
\nAnother possible order is Cities first. The Criteria for Cities is easily indexable and there might only be 1 row! Finding the articles for a City and then filtering by date should read fewer rows than finding the articles for dates and then filtering down to the City.
\nCities (Filter)\n Cities_WhiteList (Relational by City_Id) (Filter)\n Articles_to_Geo (Relational by City_Id)\n Articles (Relational by Article_Id) (Filter)\n Articles_to_Badges (Relational by Article_Id) (Filter)\n Badges (Relational by Badge_Id)\n Sites (Relational by Article_Id)\n
\nA third approach could be Badges first. This would be best if articles rarely accumulate Badges and there aren't many Badges.
\nBadges (Read the Whole Table)\n Articles_to_Badges (Relational by Badge_Id) (Filter)\n Articles (Relational by Article_Id) (Filter)\n Articles_to_Geo (Relational by Article_Id)\n Cities_WhiteList (Relational by City_Id) (Filter)\n Cities (Relational by City_Id) (Filter)\n Sites (Relational by Article_Id)\n
\n
soup wrap:
Note: SQL Server is what I use. If you're using something else - this may not apply.
Also note: I'm going to discuss indexes to aid in accessing data from a table. Covering indexes are a separate topic that I am not addressing here.
When accessing a table, there are three ways to do it.
- Use Filtering Criteria.
- Use Relational Criteria from rows already read.
- Read the Whole Table!
I started by making a list of all tables, with filtering criteria and relational criteria.
articles
articles.expirydate > 'somedate'
articles.dateadded > 'somedate'
articles.status >= someint
articles.article_id <-> articles_to_geo.article_id
articles.article_id <-> articles_to_badges.article_id
articles.site_id <-> sites.id
articles_to_geo
articles_to_geo.article_id <-> articles.article_id
articles_to_geo.whitelist_city_id <-> cities_whitelist.city_id
cities_whitelist
cities_whitelist.published = someint
cities_whitelist.city_id <-> articles_to_geo.whitelist_city_id
cities_whiltelist.city_id <-> cities.city_id
cities
cities.city_id <-> cities_whiltelist.city_id
articles_to_badges
articles_to_badges.badge_id in (some ids)
articles_to_badges.article_id <-> articles.article_id
article_to_badges.badge_id <-> badges.id
badges
badges.id <-> article_to_badges.badge_id
sites
sites.id <-> articles.site_id
The clumsiest way to approach this is simply to make an index on each table that supports each relational and filtering criterion... then let the optimizer choose which indexes it wants to use. This approach is great for IO performance, and simple to do... but it costs a lot of space in unused indexes.
The next best way is to run the query with these options turned on:
SET STATISTICS IO ON
SET STATISTICS TIME ON
If a particular set of tables is using more IO, indexing efforts can be focused on them. This relies on the optimizer's plan for the order in which the tables are accessed already being pretty good.
If the optimizer can't make a good plan at all because of the lack of indexes, what I do is figure out which order I'd like the tables to be accessed, then add indexes that support those accesses.
Note: the first table accessed does not have the option of using relational criteria, as no records have been read yet. The first table must be accessed by Filtering Criteria or by Read the Whole Table.
One possible order is the order in the query. This approach might be pretty bad because our Articles Filtering Criteria is based on 3 different ranges. There could be thousands of articles that meet that criteria and it's hard to formulate an index to support those ranges.
Articles (Filter)
Articles_to_Geo (Relational by Article_Id)
Cities_WhiteList (Relational by City_Id) (Filter)
Cities (Relational by City_Id) (Filter)
Articles_to_Badges (Relational by Article_Id) (Filter)
Badges (Relational by Badge_Id)
Sites (Relational by Article_Id)
Another possible order is Cities first. The Criteria for Cities is easily indexable and there might only be 1 row! Finding the articles for a City and then filtering by date should read fewer rows than finding the articles for dates and then filtering down to the City.
Cities (Filter)
Cities_WhiteList (Relational by City_Id) (Filter)
Articles_to_Geo (Relational by City_Id)
Articles (Relational by Article_Id) (Filter)
Articles_to_Badges (Relational by Article_Id) (Filter)
Badges (Relational by Badge_Id)
Sites (Relational by Article_Id)
A third approach could be Badges first. This would be best if articles rarely accumulate Badges and there aren't many Badges.
Badges (Read the Whole Table)
Articles_to_Badges (Relational by Badge_Id) (Filter)
Articles (Relational by Article_Id) (Filter)
Articles_to_Geo (Relational by Article_Id)
Cities_WhiteList (Relational by City_Id) (Filter)
Cities (Relational by City_Id) (Filter)
Sites (Relational by Article_Id)
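As a portable sanity check of the idea, you can create an index that supports a planned access and confirm the optimizer picks it up. A sketch with Python's sqlite3 and EXPLAIN QUERY PLAN (table and index names invented, schema cut down to two tables; on SQL Server you would read the STATISTICS output or the graphical plan instead):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE articles (article_id INTEGER PRIMARY KEY, status INT);
CREATE TABLE articles_to_badges (article_id INT, badge_id INT);
-- supports 'Articles_to_Badges (Relational by Badge_Id)' as the entry point
CREATE INDEX idx_atb_badge ON articles_to_badges (badge_id, article_id);
""")

plan = con.execute("""
EXPLAIN QUERY PLAN
SELECT a.article_id
FROM articles_to_badges b
JOIN articles a ON a.article_id = b.article_id
WHERE b.badge_id IN (1, 2)
""").fetchall()
# each plan row is (id, parent, notused, detail); detail names the index used
plan_text = " ".join(step[3] for step in plan)
```

If the index supports the access order, its name shows up in the plan; if not, you'll see a full scan and know to rethink.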
qid & accept id:
(7260488, 7261547)
query:
How can I get a single result from a related table in SQL?
soup:
Since you're using MySQL, I'll give you a MySQL-specific solution that's really easy:
\nSELECT \n gallery.id, \n gallery.thumbnail_big, \n products.id, \n products.title, \n products.size, \n products.price, \n products.text_description, \n products.main_description \nFROM gallery\nINNER JOIN products \nON gallery.id=products.id\nGROUP BY products.id\n
\nOf course this returns an arbitrary gallery.id and thumbnail_big, but you haven't specified which one you want. In practice, it'll be the one that's stored first physically in the table, but you have little control over this.
\nThe query above is ambiguous, so it wouldn't be allowed by ANSI SQL and most brands of RDBMS. But MySQL allows it (SQLite does too, for what it's worth).
\nThe better solution is to make the query not ambiguous. For instance, if you want to fetch the gallery image that has the highest primary key value:
\nSELECT \n g1.id, \n g1.thumbnail_big, \n p.id, \n p.title, \n p.size, \n p.price, \n p.text_description, \n p.main_description \nFROM products p\nINNER JOIN gallery g1 ON p.id = g1.id\nLEFT OUTER JOIN gallery g2 ON p.id = g2.id AND g1.pkey < g2.pkey\nWHERE g2.id IS NULL\n
\nI have to assume you have another column gallery.pkey that is auto-increment, or otherwise serves to uniquely distinguish gallery images for a given product. If you don't have such a column, you need to create one.
\nThen the query tries to find a row g2 for the same product, that is greater than g1. If no such row exists, then g1 must be the greatest row.
\n
soup wrap:
Since you're using MySQL, I'll give you a MySQL-specific solution that's really easy:
SELECT
gallery.id,
gallery.thumbnail_big,
products.id,
products.title,
products.size,
products.price,
products.text_description,
products.main_description
FROM gallery
INNER JOIN products
ON gallery.id=products.id
GROUP BY products.id
Of course this returns an arbitrary gallery.id and thumbnail_big, but you haven't specified which one you want. In practice, it'll be the one that's stored first physically in the table, but you have little control over this.
The query above is ambiguous, so it wouldn't be allowed by ANSI SQL and most brands of RDBMS. But MySQL allows it (SQLite does too, for what it's worth).
The better solution is to make the query not ambiguous. For instance, if you want to fetch the gallery image that has the highest primary key value:
SELECT
g1.id,
g1.thumbnail_big,
p.id,
p.title,
p.size,
p.price,
p.text_description,
p.main_description
FROM products p
INNER JOIN gallery g1 ON p.id = g1.id
LEFT OUTER JOIN gallery g2 ON p.id = g2.id AND g1.pkey < g2.pkey
WHERE g2.id IS NULL
I have to assume you have another column gallery.pkey that is auto-increment, or otherwise serves to uniquely distinguish gallery images for a given product. If you don't have such a column, you need to create one.
Then the query tries to find a row g2 for the same product that is greater than g1. If no such row exists, then g1 must be the greatest row.
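The g1/g2 "no greater row exists" trick is standard SQL and easy to verify. A minimal sketch in Python's sqlite3 (the pkey column is the auto-increment surrogate the answer assumes):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE gallery (pkey INTEGER PRIMARY KEY, id INT, thumbnail_big TEXT);
-- two images for product 10, one for product 11
INSERT INTO gallery VALUES (1, 10, 'a.jpg'), (2, 10, 'b.jpg'), (3, 11, 'c.jpg');
""")

rows = con.execute("""
SELECT g1.id, g1.thumbnail_big
FROM gallery g1
LEFT JOIN gallery g2 ON g1.id = g2.id AND g1.pkey < g2.pkey
WHERE g2.id IS NULL      -- no later image exists, so g1 is the latest
ORDER BY g1.id
""").fetchall()
```

Product 10 resolves to its highest-pkey image, and the single image for product 11 survives untouched.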
qid & accept id:
(7270243, 7273455)
query:
How to localize database table
soup:
I recommend going with the second option, although you appear to have some data-typos.
\nCountry:
\nId Code\n===============\n1 IT\n
\nLocalized_Country:
\nCountryId LanguageCode LocalizedName\n=========================================\n1 IT Italia\n1 EN Italy\n
\nWhich you then query like so:
\nSELECT a.Id, b.LocalizedName\nFROM Country as a\nJOIN Localized_Country as b\nON b.CountryId = a.Id\nAND b.LanguageCode = :InputLanguageCode\nWHERE a.Code = :InputInternationalCountryCode\n
\nWrap that (or something similar) up in a view, and you're golden.
\nSome recommendations:
\nYou may want to push Language (or some other type of Locale concept) into it's own table. The key can either be an auto-increment value, or the international characters, doesn't much matter which.
\nMake sure to put a unique constraint on (CountryId, LanguageCode), just in case. And never forget your foreign keys.
\n
soup wrap:
I recommend going with the second option, although you appear to have some data-typos.
Country:
Id Code
===============
1 IT
Localized_Country:
CountryId LanguageCode LocalizedName
=========================================
1 IT Italia
1 EN Italy
Which you then query like so:
SELECT a.Id, b.LocalizedName
FROM Country as a
JOIN Localized_Country as b
ON b.CountryId = a.Id
AND b.LanguageCode = :InputLanguageCode
WHERE a.Code = :InputInternationalCountryCode
Wrap that (or something similar) up in a view, and you're golden.
Some recommendations:
You may want to push Language (or some other type of Locale concept) into its own table. The key can be either an auto-increment value or the international characters; it doesn't much matter which.
Make sure to put a unique constraint on (CountryId, LanguageCode), just in case. And never forget your foreign keys.
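Here is the schema and lookup as a runnable sketch, using Python's sqlite3 with ? placeholders standing in for :InputLanguageCode and :InputInternationalCountryCode:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Country (Id INTEGER PRIMARY KEY, Code TEXT UNIQUE);
CREATE TABLE Localized_Country (
  CountryId INT REFERENCES Country(Id),
  LanguageCode TEXT,
  LocalizedName TEXT,
  UNIQUE (CountryId, LanguageCode)   -- one name per country per language
);
INSERT INTO Country VALUES (1, 'IT');
INSERT INTO Localized_Country VALUES (1, 'IT', 'Italia'), (1, 'EN', 'Italy');
""")

row = con.execute("""
SELECT a.Id, b.LocalizedName
FROM Country AS a
JOIN Localized_Country AS b
  ON b.CountryId = a.Id AND b.LanguageCode = ?
WHERE a.Code = ?
""", ('EN', 'IT')).fetchone()
```

Asking for country IT in language EN returns the English name.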
qid & accept id:
(7274514, 7274691)
query:
SQL query to match keywords?
soup:
Yes, possible with full text search, and likely the best answer. For a straight T-SQL solution, you could use a split function and join, e.g. assuming a table of numbers called dbo.Numbers (you may need to decide on a different upper limit):
\nSET NOCOUNT ON;\nDECLARE @UpperLimit INT;\nSET @UpperLimit = 200000;\n\nWITH n AS\n(\n SELECT\n rn = ROW_NUMBER() OVER\n (ORDER BY s1.[object_id])\n FROM sys.objects AS s1\n CROSS JOIN sys.objects AS s2\n CROSS JOIN sys.objects AS s3\n)\nSELECT [Number] = rn - 1\nINTO dbo.Numbers\nFROM n\nWHERE rn <= @UpperLimit + 1;\n\nCREATE UNIQUE CLUSTERED INDEX n ON dbo.Numbers([Number]);\n
\nAnd a splitting function that uses that table of numbers:
\nCREATE FUNCTION dbo.SplitStrings\n(\n @List NVARCHAR(MAX)\n)\nRETURNS TABLE\nAS\n RETURN\n (\n SELECT DISTINCT\n [Value] = LTRIM(RTRIM(\n SUBSTRING(@List, [Number],\n CHARINDEX(N',', @List + N',', [Number]) - [Number])))\n FROM\n dbo.Numbers\n WHERE\n Number <= LEN(@List)\n AND SUBSTRING(N',' + @List, [Number], 1) = N','\n );\nGO\n
\nThen you can simply say:
\nSELECT key, NvarcharColumn /*, other cols */\nFROM dbo.table AS outerT\nWHERE EXISTS\n(\n SELECT 1 \n FROM dbo.table AS t \n INNER JOIN dbo.SplitStrings(N'list,of,words') AS s\n ON t.NvarcharColumn LIKE '%' + s.Item + '%'\n WHERE t.key = outerT.key\n);\n
\nAs a procedure:
\nCREATE PROCEDURE dbo.Search\n @List NVARCHAR(MAX)\nAS\nBEGIN\n SET NOCOUNT ON;\n\n SELECT key, NvarcharColumn /*, other cols */\n FROM dbo.table AS outerT\n WHERE EXISTS\n (\n SELECT 1 \n FROM dbo.table AS t \n INNER JOIN dbo.SplitStrings(@List) AS s\n ON t.NvarcharColumn LIKE '%' + s.Item + '%'\n WHERE t.key = outerT.key\n );\nEND\nGO\n
\nThen you can just pass in @List (e.g. EXEC dbo.Search @List = N'foo,bar,splunge') from C#.
\nThis won't be super fast, but I'm sure it will be quicker than pulling all the data out into C# and double-nested loop it manually.
\n
soup wrap:
Yes, possible with full text search, and likely the best answer. For a straight T-SQL solution, you could use a split function and join, e.g. assuming a table of numbers called dbo.Numbers (you may need to decide on a different upper limit):
SET NOCOUNT ON;
DECLARE @UpperLimit INT;
SET @UpperLimit = 200000;
WITH n AS
(
SELECT
rn = ROW_NUMBER() OVER
(ORDER BY s1.[object_id])
FROM sys.objects AS s1
CROSS JOIN sys.objects AS s2
CROSS JOIN sys.objects AS s3
)
SELECT [Number] = rn - 1
INTO dbo.Numbers
FROM n
WHERE rn <= @UpperLimit + 1;
CREATE UNIQUE CLUSTERED INDEX n ON dbo.Numbers([Number]);
And a splitting function that uses that table of numbers:
CREATE FUNCTION dbo.SplitStrings
(
@List NVARCHAR(MAX)
)
RETURNS TABLE
AS
RETURN
(
SELECT DISTINCT
[Value] = LTRIM(RTRIM(
SUBSTRING(@List, [Number],
CHARINDEX(N',', @List + N',', [Number]) - [Number])))
FROM
dbo.Numbers
WHERE
Number <= LEN(@List)
AND SUBSTRING(N',' + @List, [Number], 1) = N','
);
GO
Then you can simply say:
SELECT [key], NvarcharColumn /*, other cols */
FROM dbo.table AS outerT
WHERE EXISTS
(
SELECT 1
FROM dbo.table AS t
INNER JOIN dbo.SplitStrings(N'list,of,words') AS s
ON t.NvarcharColumn LIKE '%' + s.[Value] + '%'
WHERE t.[key] = outerT.[key]
);
As a procedure:
CREATE PROCEDURE dbo.Search
@List NVARCHAR(MAX)
AS
BEGIN
SET NOCOUNT ON;
SELECT [key], NvarcharColumn /*, other cols */
FROM dbo.table AS outerT
WHERE EXISTS
(
SELECT 1
FROM dbo.table AS t
INNER JOIN dbo.SplitStrings(@List) AS s
ON t.NvarcharColumn LIKE '%' + s.[Value] + '%'
WHERE t.[key] = outerT.[key]
);
END
GO
Then you can just pass in @List (e.g. EXEC dbo.Search @List = N'foo,bar,splunge') from C#.
This won't be super fast, but I'm sure it will be quicker than pulling all the data out into C# and looping over it with a manual double-nested loop.
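If you don't need the Numbers-table machinery, the same any-of-these-keywords match can be sketched by splitting the list client-side and generating one parameterized LIKE per word. This is a portable illustration of the matching semantics only (Python's sqlite3, invented table), not the T-SQL procedure itself:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT);
INSERT INTO docs (body) VALUES ('foo and bar'), ('splunge only'), ('nothing here');
""")

words = [w.strip() for w in "foo,bar,splunge".split(",")]
# one parameterized LIKE per keyword, OR'd together
clause = " OR ".join("body LIKE '%' || ? || '%'" for _ in words)
rows = con.execute(f"SELECT id FROM docs WHERE {clause}", words).fetchall()
```

Rows 1 and 2 match at least one keyword; row 3 matches none and is filtered out.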
qid & accept id:
(7278905, 7281278)
query:
Efficiently find top-N values from multiple columns independently in Oracle
soup:
This should only do one pass over the table. You can use the analytic version of count() to get the frequency of each value independently:
\nselect firstname, count(*) over (partition by firstname) as c_fn,\n lastname, count(*) over (partition by lastname) as c_ln,\n favoriteanimal, count(*) over (partition by favoriteanimal) as c_fa,\n favoritebook, count(*) over (partition by favoritebook) as c_fb\nfrom my_table;\n\nFIRSTN C_FN LASTNAME C_LN FAVORIT C_FA FAVORITEBOOK C_FB\n------ ---- -------- ---- ------- ---- ------------ ----\nBill 1 Ribbits 1 Lemur 2 Dhalgren 1\nFerris 1 Freemont 2 Possum 1 Ubik 2\nNancy 2 Freemont 2 Lemur 2 Housekeeping 1\nNancy 2 Drew 1 Penguin 1 Ubik 2\n
\nYou can then use that as a CTE (or subquery factoring, I think in oracle terminology) and pull only the highest-frequency value from each column:
\nwith tmp_tab as (\n select /*+ MATERIALIZE */\n firstname, count(*) over (partition by firstname) as c_fn,\n lastname, count(*) over (partition by lastname) as c_ln,\n favoriteanimal, count(*) over (partition by favoriteanimal) as c_fa,\n favoritebook, count(*) over (partition by favoritebook) as c_fb\n from my_table)\nselect (select firstname from (\n select firstname,\n row_number() over (partition by null order by c_fn desc) as r_fn\n from tmp_tab\n ) where r_fn = 1) as firstname,\n (select lastname from (\n select lastname,\n row_number() over (partition by null order by c_ln desc) as r_ln\n from tmp_tab\n ) where r_ln = 1) as lastname,\n (select favoriteanimal from (\n select favoriteanimal,\n row_number() over (partition by null order by c_fa desc) as r_fa\n from tmp_tab\n ) where r_fa = 1) as favoriteanimal,\n (select favoritebook from (\n select favoritebook,\n row_number() over (partition by null order by c_fb desc) as r_fb\n from tmp_tab\n ) where r_fb = 1) as favoritebook\nfrom dual;\n\nFIRSTN LASTNAME FAVORIT FAVORITEBOOK\n------ -------- ------- ------------\nNancy Freemont Lemur Ubik\n
\nYou're doing one pass over the CTE for each column, but that should still only hit the real table once (thanks to the materialize hint). And you may want to add to the order by clauses to tweak what do to if there are ties.
\nThis is similar in concept to what Thilo, ysth and others have suggested, except you're letting Oracle keep track of all the counting.
\nEdit: Hmm, explain plan shows it doing four full table scans; may need to think about this a bit more...\nEdit 2: Adding the (undocumented) MATERIALIZE hint to the CTE seems to resolve this; it's creating a transient temporary table to hold the results, and only does one full table scan. The explain plan cost is higher though - at least on this time sample data set. Be interested in any comments on any downside to doing this.
\n
soup wrap:
This should only do one pass over the table. You can use the analytic version of count() to get the frequency of each value independently:
select firstname, count(*) over (partition by firstname) as c_fn,
lastname, count(*) over (partition by lastname) as c_ln,
favoriteanimal, count(*) over (partition by favoriteanimal) as c_fa,
favoritebook, count(*) over (partition by favoritebook) as c_fb
from my_table;
FIRSTN C_FN LASTNAME C_LN FAVORIT C_FA FAVORITEBOOK C_FB
------ ---- -------- ---- ------- ---- ------------ ----
Bill 1 Ribbits 1 Lemur 2 Dhalgren 1
Ferris 1 Freemont 2 Possum 1 Ubik 2
Nancy 2 Freemont 2 Lemur 2 Housekeeping 1
Nancy 2 Drew 1 Penguin 1 Ubik 2
You can then use that as a CTE (or "subquery factoring", as I think it's called in Oracle terminology) and pull only the highest-frequency value from each column:
with tmp_tab as (
select /*+ MATERIALIZE */
firstname, count(*) over (partition by firstname) as c_fn,
lastname, count(*) over (partition by lastname) as c_ln,
favoriteanimal, count(*) over (partition by favoriteanimal) as c_fa,
favoritebook, count(*) over (partition by favoritebook) as c_fb
from my_table)
select (select firstname from (
select firstname,
row_number() over (partition by null order by c_fn desc) as r_fn
from tmp_tab
) where r_fn = 1) as firstname,
(select lastname from (
select lastname,
row_number() over (partition by null order by c_ln desc) as r_ln
from tmp_tab
) where r_ln = 1) as lastname,
(select favoriteanimal from (
select favoriteanimal,
row_number() over (partition by null order by c_fa desc) as r_fa
from tmp_tab
) where r_fa = 1) as favoriteanimal,
(select favoritebook from (
select favoritebook,
row_number() over (partition by null order by c_fb desc) as r_fb
from tmp_tab
) where r_fb = 1) as favoritebook
from dual;
FIRSTN LASTNAME FAVORIT FAVORITEBOOK
------ -------- ------- ------------
Nancy Freemont Lemur Ubik
You're doing one pass over the CTE for each column, but that should still only hit the real table once (thanks to the materialize hint). And you may want to add to the ORDER BY clauses to tweak what to do if there are ties.
This is similar in concept to what Thilo, ysth and others have suggested, except you're letting Oracle keep track of all the counting.
Edit: Hmm, explain plan shows it doing four full table scans; I may need to think about this a bit more...
Edit 2: Adding the (undocumented) MATERIALIZE hint to the CTE seems to resolve this; it creates a transient temporary table to hold the results, and only does one full table scan. The explain plan cost is higher though - at least on this tiny sample data set. I'd be interested in any comments on any downside to doing this.
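The count-then-rank idea itself runs anywhere with window functions. A cut-down sketch for two of the columns using Python's sqlite3 (SQLite 3.25+), with ORDER BY ... LIMIT 1 standing in for row_number() and a tiny invented data set:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE my_table (firstname TEXT, favoritebook TEXT);
INSERT INTO my_table VALUES
  ('Bill',   'Dhalgren'),     ('Nancy',  'Ubik'),
  ('Nancy',  'Housekeeping'), ('Ferris', 'Ubik');
""")

row = con.execute("""
WITH tmp_tab AS (
  SELECT firstname,
         COUNT(*) OVER (PARTITION BY firstname)    AS c_fn,
         favoritebook,
         COUNT(*) OVER (PARTITION BY favoritebook) AS c_fb
  FROM my_table
)
SELECT
  (SELECT firstname    FROM tmp_tab ORDER BY c_fn DESC LIMIT 1) AS top_first,
  (SELECT favoritebook FROM tmp_tab ORDER BY c_fb DESC LIMIT 1) AS top_book
""").fetchone()
```

Each column's most frequent value is picked independently, exactly as in the Oracle version.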
qid & accept id:
(7315875, 7316118)
query:
SQL Query for an update of a column based on other column's data in a Table
soup:
According to your comment on the other answer,
\nUPDATE Network_Plant_Items\n SET FULL_ADDRESS = 'foobar' || COALESCE(BARCODE, MANUF_SERIAL_NUMBER)\n WHERE BARCODE IS NOT NULL OR MANUF_SERIAL_NUMBER IS NOT NULL\n
\nIf you want to append this to the current value of FULL_ADDRESS, as I understand from the original question,
\nUPDATE Network_Plant_Items\n SET FULL_ADDRESS = FULL_ADDRESS || COALESCE(BARCODE, MANUF_SERIAL_NUMBER)\n WHERE BARCODE IS NOT NULL OR MANUF_SERIAL_NUMBER IS NOT NULL\n
\nCOALESCE() returns the first non-NULL argument you pass to it. See Oracle's manual page on it.
\nJust as a general FIY, NVM() that was suggested by another answers is the old Oracle-specific version of COALESCE(), which works kinda the same - but it only supports two arguments and evaluates the second argument even if the first one is non-null (or in other words, its not short-circuit evaluated). Generally, it should be avoided and the standard COALESCE should be used instead, unless you explicitly need to evaluate all the arguments even when there's no need for it.
\n
soup wrap:
According to your comment on the other answer,
UPDATE Network_Plant_Items
SET FULL_ADDRESS = 'foobar' || COALESCE(BARCODE, MANUF_SERIAL_NUMBER)
WHERE BARCODE IS NOT NULL OR MANUF_SERIAL_NUMBER IS NOT NULL
If you want to append this to the current value of FULL_ADDRESS, as I understand from the original question,
UPDATE Network_Plant_Items
SET FULL_ADDRESS = FULL_ADDRESS || COALESCE(BARCODE, MANUF_SERIAL_NUMBER)
WHERE BARCODE IS NOT NULL OR MANUF_SERIAL_NUMBER IS NOT NULL
COALESCE() returns the first non-NULL argument you pass to it. See Oracle's manual page on it.
Just as a general FYI, NVL(), which was suggested by another answer, is the old Oracle-specific version of COALESCE(). It works much the same, but it only supports two arguments and evaluates the second argument even if the first one is non-null (in other words, it's not short-circuit evaluated). Generally it should be avoided in favour of the standard COALESCE(), unless you explicitly need the second argument evaluated even when it won't be used.
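A quick runnable check of the append-with-COALESCE behavior, in Python's sqlite3 (table and column names invented for the demo; || is the same concatenation operator as in Oracle):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE plant_items (full_address TEXT, barcode TEXT, serial_no TEXT);
INSERT INTO plant_items VALUES
  ('addr1-', 'BC1', NULL),     -- barcode wins
  ('addr2-', NULL,  'SN2'),    -- falls back to the serial number
  ('addr3-', NULL,  NULL);     -- excluded by the WHERE clause
""")

con.execute("""
UPDATE plant_items
SET full_address = full_address || COALESCE(barcode, serial_no)
WHERE barcode IS NOT NULL OR serial_no IS NOT NULL
""")
rows = [r[0] for r in con.execute("SELECT full_address FROM plant_items")]
```

The all-NULL row is untouched, which is exactly why the WHERE clause matters: without it, NULL would be concatenated and wipe out full_address.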
qid & accept id:
(7322330, 7322424)
query:
use count in sql
soup:
This will work with most SQL DBMS, but shows the count value.
\nSELECT ID, Owner_ID, Owner_Count\n FROM AnonymousTable AS A\n JOIN (SELECT Owner_ID, COUNT(*) AS Owner_Count\n FROM AnonymousTable\n GROUP BY Owner_ID\n ) AS B ON B.Owner_ID = A.Owner_ID\n ORDER BY Owner_Count DESC, Owner_ID ASC, ID ASC;\n
\nThis will work with some, but not necessarily all, DBMS; it orders by a column that is not shown in the result list:
\nSELECT ID, Owner_ID\n FROM AnonymousTable AS A\n JOIN (SELECT Owner_ID, COUNT(*) AS Owner_Count\n FROM AnonymousTable\n GROUP BY Owner_ID\n ) AS B ON B.Owner_ID = A.Owner_ID\n ORDER BY Owner_Count DESC, Owner_ID ASC, ID ASC;\n
\n
soup wrap:
This will work with most SQL DBMSs, but it shows the count value.
SELECT A.ID, A.Owner_ID, B.Owner_Count
FROM AnonymousTable AS A
JOIN (SELECT Owner_ID, COUNT(*) AS Owner_Count
FROM AnonymousTable
GROUP BY Owner_ID
) AS B ON B.Owner_ID = A.Owner_ID
ORDER BY B.Owner_Count DESC, A.Owner_ID ASC, A.ID ASC;
This will work with some, but not necessarily all, DBMS; it orders by a column that is not shown in the result list:
SELECT A.ID, A.Owner_ID
FROM AnonymousTable AS A
JOIN (SELECT Owner_ID, COUNT(*) AS Owner_Count
FROM AnonymousTable
GROUP BY Owner_ID
) AS B ON B.Owner_ID = A.Owner_ID
ORDER BY B.Owner_Count DESC, A.Owner_ID ASC, A.ID ASC;
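The first query as a runnable sketch in Python's sqlite3, with columns qualified by their table alias (some engines reject the bare Owner_ID as ambiguous, since both sides of the join expose it):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE AnonymousTable (ID INTEGER PRIMARY KEY, Owner_ID INT);
INSERT INTO AnonymousTable VALUES (1, 7), (2, 7), (3, 9);
""")

rows = con.execute("""
SELECT A.ID, A.Owner_ID, B.Owner_Count
FROM AnonymousTable AS A
JOIN (SELECT Owner_ID, COUNT(*) AS Owner_Count
      FROM AnonymousTable
      GROUP BY Owner_ID) AS B ON B.Owner_ID = A.Owner_ID
ORDER BY B.Owner_Count DESC, A.Owner_ID ASC, A.ID ASC
""").fetchall()
```

Owner 7's two rows sort first because of the descending count, then owner 9's single row.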
qid & accept id:
(7326337, 7326639)
query:
Updating a column based on values from other rows
soup:
Following your edit...
\nDECLARE @T TABLE\n(\nID INT,\nCategoryID CHAR(4),\nCode CHAR(4),\nStatus CHAR(4) NULL\n)\nINSERT INTO @T (ID,CategoryID, Code)\nSELECT 1,'A100',0012 UNION ALL SELECT 2,'A100',0012 UNION ALL\nSELECT 3,'A100',0055 UNION ALL SELECT 4,'A100',0012 UNION ALL\nSELECT 5,'B201',1116 UNION ALL SELECT 6,'B201',1116 UNION ALL\nSELECT 7,'B201',1121 UNION ALL SELECT 8,'B201',1024;\n\nWITH T AS\n(\nSELECT *, MIN(Code) OVER (PARTITION BY CategoryID ) AS MinCode\nfrom @T\n)\nUPDATE T\nSET Status = 'FAIL'\nWHERE Code <> MinCode\n\nSELECT *\nFROM @T\n
\nReturns
\nID CategoryID Code Status\n----------- ---------- ---- ------\n1 A100 12 NULL\n2 A100 12 NULL\n3 A100 55 FAIL\n4 A100 12 NULL\n5 B201 1116 FAIL\n6 B201 1116 FAIL\n7 B201 1121 FAIL\n8 B201 1024 NULL\n
\n
soup wrap:
Following your edit...
DECLARE @T TABLE
(
ID INT,
CategoryID CHAR(4),
Code CHAR(4),
Status CHAR(4) NULL
)
INSERT INTO @T (ID,CategoryID, Code)
SELECT 1,'A100',0012 UNION ALL SELECT 2,'A100',0012 UNION ALL
SELECT 3,'A100',0055 UNION ALL SELECT 4,'A100',0012 UNION ALL
SELECT 5,'B201',1116 UNION ALL SELECT 6,'B201',1116 UNION ALL
SELECT 7,'B201',1121 UNION ALL SELECT 8,'B201',1024;
WITH T AS
(
SELECT *, MIN(Code) OVER (PARTITION BY CategoryID ) AS MinCode
from @T
)
UPDATE T
SET Status = 'FAIL'
WHERE Code <> MinCode
SELECT *
FROM @T
Returns
ID CategoryID Code Status
----------- ---------- ---- ------
1 A100 12 NULL
2 A100 12 NULL
3 A100 55 FAIL
4 A100 12 NULL
5 B201 1116 FAIL
6 B201 1116 FAIL
7 B201 1121 FAIL
8 B201 1024 NULL
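The same per-category comparison can also be expressed without an updatable CTE, via a correlated subquery. A portable sketch in Python's sqlite3; note this swaps the MIN(...) OVER window for a plain per-category MIN, which produces the same result here:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (ID INT, CategoryID TEXT, Code INT, Status TEXT);
INSERT INTO t VALUES
  (1, 'A100',   12, NULL), (3, 'A100',   55, NULL),
  (5, 'B201', 1116, NULL), (8, 'B201', 1024, NULL);
""")

con.execute("""
UPDATE t
SET Status = 'FAIL'
WHERE Code <> (SELECT MIN(Code) FROM t AS x
               WHERE x.CategoryID = t.CategoryID)
""")
rows = con.execute("SELECT ID, Status FROM t ORDER BY ID").fetchall()
```

Only the rows holding each category's minimum Code keep a NULL Status; everything else is marked FAIL.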
qid & accept id:
(7364969, 7774879)
query:
How to filter SQL results in a has-many-through relation
soup:
I was curious. And as we all know, curiosity has a reputation for killing cats.
\nSo, which is the fastest way to skin a cat?
\nThe precise cat-skinning environment for this test:
\n\n- PostgreSQL 9.0 on Debian Squeeze with decent RAM and settings.
\n- 6,000 students, 24,000 club memberships (data copied from a similar database with real-life data).
\n- Slight diversion from the naming schema in the question:
student.id is student.stud_id and club.id is club.club_id here. \n- I named the queries after their author in this thread, with an index where there are two.
\n- I ran all queries a couple of times to populate the cache, then I picked the best of 5 with EXPLAIN ANALYZE.
\nRelevant indexes (should be the optimum - as long as we lack foreknowledge of which clubs will be queried):
\nALTER TABLE student ADD CONSTRAINT student_pkey PRIMARY KEY(stud_id );\nALTER TABLE student_club ADD CONSTRAINT sc_pkey PRIMARY KEY(stud_id, club_id);\nALTER TABLE club ADD CONSTRAINT club_pkey PRIMARY KEY(club_id );\nCREATE INDEX sc_club_id_idx ON student_club (club_id);\n
\nclub_pkey is not required by most queries here.
\nPrimary keys implement unique indexes automatically in PostgreSQL.
\nThe last index is to make up for this known shortcoming of multi-column indexes on PostgreSQL:
\n
\n\nA multicolumn B-tree index can be used with query conditions that\n involve any subset of the index's columns, but the index is most\n efficient when there are constraints on the leading (leftmost)\n columns.
\n
\nResults:
\nTotal runtimes from EXPLAIN ANALYZE.
\n1) Martin 2: 44.594 ms
\nSELECT s.stud_id, s.name\nFROM student s\nJOIN student_club sc USING (stud_id)\nWHERE sc.club_id IN (30, 50)\nGROUP BY 1,2\nHAVING COUNT(*) > 1;\n
\n
\n2) Erwin 1: 33.217 ms
\nSELECT s.stud_id, s.name\nFROM student s\nJOIN (\n SELECT stud_id\n FROM student_club\n WHERE club_id IN (30, 50)\n GROUP BY 1\n HAVING COUNT(*) > 1\n ) sc USING (stud_id);\n
\n
\n3) Martin 1: 31.735 ms
\nSELECT s.stud_id, s.name\n FROM student s\n WHERE stud_id IN (\n SELECT stud_id\n FROM student_club\n WHERE club_id = 30\n INTERSECT\n SELECT stud_id\n FROM student_club\n WHERE club_id = 50);\n
\n
\n4) Derek: 2.287 ms
\nSELECT s.stud_id, s.name\nFROM student s\nWHERE s.stud_id IN (SELECT stud_id FROM student_club WHERE club_id = 30)\nAND s.stud_id IN (SELECT stud_id FROM student_club WHERE club_id = 50);\n
\n
\n5) Erwin 2: 2.181 ms
\nSELECT s.stud_id, s.name\nFROM student s\nWHERE EXISTS (SELECT * FROM student_club\n WHERE stud_id = s.stud_id AND club_id = 30)\nAND EXISTS (SELECT * FROM student_club\n WHERE stud_id = s.stud_id AND club_id = 50);\n
\n
\n6) Sean: 2.043 ms
\nSELECT s.stud_id, s.name\nFROM student s\nJOIN student_club x ON s.stud_id = x.stud_id\nJOIN student_club y ON s.stud_id = y.stud_id\nWHERE x.club_id = 30\nAND y.club_id = 50;\n
\nThe last three perform pretty much the same. 4) and 5) result in the same query plan.
\nLate Additions:
\nFancy SQL, but the performance can't keep up.
\n7) ypercube 1: 148.649 ms
\nSELECT s.stud_id, s.name\nFROM student AS s\nWHERE NOT EXISTS (\n SELECT *\n FROM club AS c \n WHERE c.club_id IN (30, 50)\n AND NOT EXISTS (\n SELECT *\n FROM student_club AS sc \n WHERE sc.stud_id = s.stud_id\n AND sc.club_id = c.club_id \n )\n );\n
\n
\n8) ypercube 2: 147.497 ms
\nSELECT s.stud_id, s.name\nFROM student AS s\nWHERE NOT EXISTS (\n SELECT *\n FROM (\n SELECT 30 AS club_id \n UNION ALL\n SELECT 50\n ) AS c\n WHERE NOT EXISTS (\n SELECT *\n FROM student_club AS sc \n WHERE sc.stud_id = s.stud_id\n AND sc.club_id = c.club_id \n )\n );\n
\nAs expected, those two perform almost the same. Query plan results in table scans, the planner doesn't find a way to use the indexes here.
\n
\n9) wildplasser 1: 49.849 ms
\nWITH RECURSIVE two AS (\n SELECT 1::int AS level\n , stud_id\n FROM student_club sc1\n WHERE sc1.club_id = 30\n UNION\n SELECT two.level + 1 AS level\n , sc2.stud_id\n FROM student_club sc2\n JOIN two USING (stud_id)\n WHERE sc2.club_id = 50\n AND two.level = 1\n )\nSELECT s.stud_id, s.name\nFROM student s\nJOIN two USING (stud_id)\nWHERE two.level > 1;\n
\nFancy SQL, decent performance for a CTE. Very exotic query plan.
\nAgain, it would be interesting to see how 9.1 handles this. I am going to upgrade the db cluster used here to 9.1 soon. Maybe I'll rerun the whole shebang ...
\n
\n10) wildplasser 2: 36.986 ms
\nWITH sc AS (\n SELECT stud_id\n FROM student_club\n WHERE club_id IN (30,50)\n GROUP BY stud_id\n HAVING COUNT(*) > 1\n )\nSELECT s.*\nFROM student s\nJOIN sc USING (stud_id);\n
\nCTE variant of query 2). Surprisingly, it can result in a slightly different query plan with the exact same data. I found a sequential scan on student, where the subquery-variant used the index.
\n
\n11) ypercube 3: 101.482 ms
\nAnother late addition by @ypercube. It is positively amazing how many ways there are.
\nSELECT s.stud_id, s.name\nFROM student s\nJOIN student_club sc USING (stud_id)\nWHERE sc.club_id = 10 -- member in 1st club ...\nAND NOT EXISTS (\n SELECT *\n FROM (SELECT 14 AS club_id) AS c -- can't be excluded for missing the 2nd\n WHERE NOT EXISTS (\n SELECT *\n FROM student_club AS d\n WHERE d.stud_id = sc.stud_id\n AND d.club_id = c.club_id\n )\n )\n
\n
\n12) erwin 3: 2.377 ms
\n@ypercube's 11) is actually just the mind-twisting reverse approach of this simpler variant, that was also still missing. Performs almost as fast as the top cats.
\nSELECT s.*\nFROM student s\nJOIN student_club x USING (stud_id)\nWHERE x.club_id = 10 -- member in 1st club ...\nAND EXISTS ( -- ... and membership in 2nd exists\n SELECT *\n FROM student_club AS y\n WHERE y.stud_id = s.stud_id\n AND y.club_id = 14\n )\n
\n13) erwin 4: 2.375 ms
\nHard to believe, but here's another, genuinely new variant. I see potential for more than two memberships, but it also ranks among the top cats with just two.
\nSELECT s.*\nFROM student AS s\nWHERE EXISTS (\n SELECT *\n FROM student_club AS x\n JOIN student_club AS y USING (stud_id)\n WHERE x.stud_id = s.stud_id\n AND x.club_id = 14\n AND y.club_id = 10\n )\n
\nDynamic number of club memberships
\nIn other words: varying number of filters. This question asked for exactly two club memberships. But many use cases have to prepare for a varying number.
\nDetailed discussion in this related later answer:
\n\n
soup wrap:
I was curious. And as we all know, curiosity has a reputation for killing cats.
So, which is the fastest way to skin a cat?
The precise cat-skinning environment for this test:
- PostgreSQL 9.0 on Debian Squeeze with decent RAM and settings.
- 6,000 students, 24,000 club memberships (data copied from a similar database with real-life data.)
- Slight diversion from the naming schema in the question:
student.id is student.stud_id and club.id is club.club_id here.
- I named the queries after their author in this thread, with an index where there are two.
- I ran all queries a couple of times to populate the cache, then I picked the best of 5 with EXPLAIN ANALYZE.
Relevant indexes (should be the optimum - as long as we lack fore-knowledge which clubs will be queried):
ALTER TABLE student ADD CONSTRAINT student_pkey PRIMARY KEY(stud_id );
ALTER TABLE student_club ADD CONSTRAINT sc_pkey PRIMARY KEY(stud_id, club_id);
ALTER TABLE club ADD CONSTRAINT club_pkey PRIMARY KEY(club_id );
CREATE INDEX sc_club_id_idx ON student_club (club_id);
club_pkey is not required by most queries here.
Primary keys implement unique indexes automatically in PostgreSQL.
The last index is to make up for this known shortcoming of multi-column indexes on PostgreSQL:
A multicolumn B-tree index can be used with query conditions that
involve any subset of the index's columns, but the index is most
efficient when there are constraints on the leading (leftmost)
columns.
Results:
Total runtimes from EXPLAIN ANALYZE.
1) Martin 2: 44.594 ms
SELECT s.stud_id, s.name
FROM student s
JOIN student_club sc USING (stud_id)
WHERE sc.club_id IN (30, 50)
GROUP BY 1,2
HAVING COUNT(*) > 1;
2) Erwin 1: 33.217 ms
SELECT s.stud_id, s.name
FROM student s
JOIN (
SELECT stud_id
FROM student_club
WHERE club_id IN (30, 50)
GROUP BY 1
HAVING COUNT(*) > 1
) sc USING (stud_id);
3) Martin 1: 31.735 ms
SELECT s.stud_id, s.name
FROM student s
WHERE stud_id IN (
SELECT stud_id
FROM student_club
WHERE club_id = 30
INTERSECT
SELECT stud_id
FROM student_club
WHERE club_id = 50);
4) Derek: 2.287 ms
SELECT s.stud_id, s.name
FROM student s
WHERE s.stud_id IN (SELECT stud_id FROM student_club WHERE club_id = 30)
AND s.stud_id IN (SELECT stud_id FROM student_club WHERE club_id = 50);
5) Erwin 2: 2.181 ms
SELECT s.stud_id, s.name
FROM student s
WHERE EXISTS (SELECT * FROM student_club
WHERE stud_id = s.stud_id AND club_id = 30)
AND EXISTS (SELECT * FROM student_club
WHERE stud_id = s.stud_id AND club_id = 50);
6) Sean: 2.043 ms
SELECT s.stud_id, s.name
FROM student s
JOIN student_club x ON s.stud_id = x.stud_id
JOIN student_club y ON s.stud_id = y.stud_id
WHERE x.club_id = 30
AND y.club_id = 50;
The last three perform pretty much the same. 4) and 5) result in the same query plan.
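The three fastest forms are easy to sanity-check on toy data. Here is query 5) (the double-EXISTS variant) run against sqlite3 from Python, purely to verify the logic; the schema names follow the benchmark above, and the three students and their memberships are made up.

```python
import sqlite3

# Minimal re-creation of query 5): students who are members of BOTH clubs 30 and 50.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE student (stud_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE student_club (stud_id INTEGER, club_id INTEGER,
                           PRIMARY KEY (stud_id, club_id));
INSERT INTO student VALUES (1, 'Ann'), (2, 'Bob'), (3, 'Cid');
INSERT INTO student_club VALUES (1, 30), (1, 50), (2, 30), (3, 50);
""")

members_of_both = con.execute("""
    SELECT s.stud_id, s.name
    FROM student s
    WHERE EXISTS (SELECT * FROM student_club
                  WHERE stud_id = s.stud_id AND club_id = 30)
      AND EXISTS (SELECT * FROM student_club
                  WHERE stud_id = s.stud_id AND club_id = 50)
""").fetchall()
```

Only Ann holds both memberships, so the query returns a single row for her.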
Late Additions:
Fancy SQL, but the performance can't keep up.
7) ypercube 1: 148.649 ms
SELECT s.stud_id, s.name
FROM student AS s
WHERE NOT EXISTS (
SELECT *
FROM club AS c
WHERE c.club_id IN (30, 50)
AND NOT EXISTS (
SELECT *
FROM student_club AS sc
WHERE sc.stud_id = s.stud_id
AND sc.club_id = c.club_id
)
);
8) ypercube 2: 147.497 ms
SELECT s.stud_id, s.name
FROM student AS s
WHERE NOT EXISTS (
SELECT *
FROM (
SELECT 30 AS club_id
UNION ALL
SELECT 50
) AS c
WHERE NOT EXISTS (
SELECT *
FROM student_club AS sc
WHERE sc.stud_id = s.stud_id
AND sc.club_id = c.club_id
)
);
As expected, those two perform almost the same. Query plan results in table scans, the planner doesn't find a way to use the indexes here.
9) wildplasser 1: 49.849 ms
WITH RECURSIVE two AS (
SELECT 1::int AS level
, stud_id
FROM student_club sc1
WHERE sc1.club_id = 30
UNION
SELECT two.level + 1 AS level
, sc2.stud_id
FROM student_club sc2
JOIN two USING (stud_id)
WHERE sc2.club_id = 50
AND two.level = 1
)
SELECT s.stud_id, s.name
FROM student s
JOIN two USING (stud_id)
WHERE two.level > 1;
Fancy SQL, decent performance for a CTE. Very exotic query plan.
Again, it would be interesting to see how 9.1 handles this. I am going to upgrade the db cluster used here to 9.1 soon. Maybe I'll rerun the whole shebang ...
10) wildplasser 2: 36.986 ms
WITH sc AS (
SELECT stud_id
FROM student_club
WHERE club_id IN (30,50)
GROUP BY stud_id
HAVING COUNT(*) > 1
)
SELECT s.*
FROM student s
JOIN sc USING (stud_id);
CTE variant of query 2). Surprisingly, it can result in a slightly different query plan with the exact same data. I found a sequential scan on student, where the subquery-variant used the index.
11) ypercube 3: 101.482 ms
Another late addition by @ypercube. It is positively amazing how many ways there are.
SELECT s.stud_id, s.name
FROM student s
JOIN student_club sc USING (stud_id)
WHERE sc.club_id = 10 -- member in 1st club ...
AND NOT EXISTS (
SELECT *
FROM (SELECT 14 AS club_id) AS c -- can't be excluded for missing the 2nd
WHERE NOT EXISTS (
SELECT *
FROM student_club AS d
WHERE d.stud_id = sc.stud_id
AND d.club_id = c.club_id
)
)
12) erwin 3: 2.377 ms
@ypercube's 11) is actually just the mind-twisting reverse approach of this simpler variant, that was also still missing. Performs almost as fast as the top cats.
SELECT s.*
FROM student s
JOIN student_club x USING (stud_id)
WHERE x.club_id = 10 -- member in 1st club ...
AND EXISTS ( -- ... and membership in 2nd exists
SELECT *
FROM student_club AS y
WHERE y.stud_id = s.stud_id
AND y.club_id = 14
)
13) erwin 4: 2.375 ms
Hard to believe, but here's another, genuinely new variant. I see potential for more than two memberships, but it also ranks among the top cats with just two.
SELECT s.*
FROM student AS s
WHERE EXISTS (
SELECT *
FROM student_club AS x
JOIN student_club AS y USING (stud_id)
WHERE x.stud_id = s.stud_id
AND x.club_id = 14
AND y.club_id = 10
)
Dynamic number of club memberships
In other words: varying number of filters. This question asked for exactly two club memberships. But many use cases have to prepare for a varying number.
Detailed discussion in this related later answer:
qid & accept id:
(7392374, 7392728)
query:
Calculate column in view based on other column values
soup:
Edited per @HLGM's comments to make it a bit more robust.
\nNote that in its current form, I assume that when
\n\n- all alarms equal 1, the range should be
NULL \n- only one alarm equals 0, the range is the value of this alarm.
\n
\nIf this does not suffice, OP might clarify what should be returned instead.
\nSQL Statement
\n ;WITH Alarm (C1, C1Alarm, C2, C2Alarm, C3, C3Alarm, C4, C4Alarm) AS (\n SELECT 12.44, 0, 99.43, 0, 4.43, 1, 43.33, 0\n UNION ALL SELECT 12.44, 1, 99.43, 0, 4.43, 1, 43.33, 0\n UNION ALL SELECT 1, 0, 2, 1, 3, 1, 4, 1\n UNION ALL SELECT 1, 1, 2, 1, 3, 1, 4, 1\n )\n , AddRowNumbers AS (\n SELECT rowNumber = ROW_NUMBER() OVER (ORDER BY C1)\n , C1, C1Alarm\n , C2, C2Alarm\n , C3, C3Alarm\n , C4, C4Alarm\n FROM Alarm \n )\n , UnPivotColumns AS (\n SELECT rowNumber, value = C1 FROM AddRowNumbers WHERE C1Alarm = 0\n UNION ALL SELECT rowNumber, C2 FROM AddRowNumbers WHERE C2Alarm = 0\n UNION ALL SELECT rowNumber, C3 FROM AddRowNumbers WHERE C3Alarm = 0\n UNION ALL SELECT rowNumber, C4 FROM AddRowNumbers WHERE C4Alarm = 0\n )\n SELECT C1, C1Alarm\n , C2, C2Alarm\n , C3, C3Alarm\n , C4, C4Alarm\n , COALESCE(range1.range, range2.range)\n FROM AddRowNumbers rowNumber\n LEFT OUTER JOIN (SELECT rowNumber, range = MAX(value) - MIN(value) FROM UnPivotColumns GROUP BY rowNumber HAVING COUNT(*) > 1) range1 ON range1.rowNumber = rowNumber.rowNumber\n LEFT OUTER JOIN (SELECT rowNumber, range = AVG(value) FROM UnPivotColumns GROUP BY rowNumber HAVING COUNT(*) = 1) range2 ON range2.rowNumber = rowNumber.rowNumber \n
\nTest script
\n;WITH Alarm (C1, C1Alarm, C2, C2Alarm, C3, C3Alarm, C4, C4Alarm) AS (\n SELECT 12.44, 0, 99.43, 0, 4.43, 1, 43.33, 0\n UNION ALL SELECT 12.44, 1, 99.43, 0, 4.43, 1, 43.33, 0\n UNION ALL SELECT 1, 0, 2, 1, 3, 1, 4, 1\n UNION ALL SELECT 1, 1, 2, 1, 3, 1, 4, 1\n)\n, AddRowNumbers AS (\n SELECT rowNumber = ROW_NUMBER() OVER (ORDER BY C1)\n , C1, C1Alarm\n , C2, C2Alarm\n , C3, C3Alarm\n , C4, C4Alarm\n FROM Alarm \n)\n, UnPivotColumns AS (\n SELECT rowNumber, value = C1 FROM AddRowNumbers WHERE C1Alarm = 0\n UNION ALL SELECT rowNumber, C2 FROM AddRowNumbers WHERE C2Alarm = 0\n UNION ALL SELECT rowNumber, C3 FROM AddRowNumbers WHERE C3Alarm = 0\n UNION ALL SELECT rowNumber, C4 FROM AddRowNumbers WHERE C4Alarm = 0\n)\nSELECT C1, C1Alarm\n , C2, C2Alarm\n , C3, C3Alarm\n , C4, C4Alarm\n , COALESCE(range1.range, range2.range)\nFROM AddRowNumbers rowNumber\n LEFT OUTER JOIN (SELECT rowNumber, range = MAX(value) - MIN(value) FROM UnPivotColumns GROUP BY rowNumber HAVING COUNT(*) > 1) range1 ON range1.rowNumber = rowNumber.rowNumber\n LEFT OUTER JOIN (SELECT rowNumber, range = AVG(value) FROM UnPivotColumns GROUP BY rowNumber HAVING COUNT(*) = 1) range2 ON range2.rowNumber = rowNumber.rowNumber \n
\n
soup wrap:
Edited per @HLGM's comments to make it a bit more robust.
Note that in its current form, I assume that when
- all alarms equal 1, the range should be
NULL
- only one alarm equals 0, the range is the value of this alarm.
If this does not suffice, OP might clarify what should be returned instead.
SQL Statement
;WITH Alarm (C1, C1Alarm, C2, C2Alarm, C3, C3Alarm, C4, C4Alarm) AS (
SELECT 12.44, 0, 99.43, 0, 4.43, 1, 43.33, 0
UNION ALL SELECT 12.44, 1, 99.43, 0, 4.43, 1, 43.33, 0
UNION ALL SELECT 1, 0, 2, 1, 3, 1, 4, 1
UNION ALL SELECT 1, 1, 2, 1, 3, 1, 4, 1
)
, AddRowNumbers AS (
SELECT rowNumber = ROW_NUMBER() OVER (ORDER BY C1)
, C1, C1Alarm
, C2, C2Alarm
, C3, C3Alarm
, C4, C4Alarm
FROM Alarm
)
, UnPivotColumns AS (
SELECT rowNumber, value = C1 FROM AddRowNumbers WHERE C1Alarm = 0
UNION ALL SELECT rowNumber, C2 FROM AddRowNumbers WHERE C2Alarm = 0
UNION ALL SELECT rowNumber, C3 FROM AddRowNumbers WHERE C3Alarm = 0
UNION ALL SELECT rowNumber, C4 FROM AddRowNumbers WHERE C4Alarm = 0
)
SELECT C1, C1Alarm
, C2, C2Alarm
, C3, C3Alarm
, C4, C4Alarm
, COALESCE(range1.range, range2.range)
FROM AddRowNumbers rowNumber
LEFT OUTER JOIN (SELECT rowNumber, range = MAX(value) - MIN(value) FROM UnPivotColumns GROUP BY rowNumber HAVING COUNT(*) > 1) range1 ON range1.rowNumber = rowNumber.rowNumber
LEFT OUTER JOIN (SELECT rowNumber, range = AVG(value) FROM UnPivotColumns GROUP BY rowNumber HAVING COUNT(*) = 1) range2 ON range2.rowNumber = rowNumber.rowNumber
Test script
;WITH Alarm (C1, C1Alarm, C2, C2Alarm, C3, C3Alarm, C4, C4Alarm) AS (
SELECT 12.44, 0, 99.43, 0, 4.43, 1, 43.33, 0
UNION ALL SELECT 12.44, 1, 99.43, 0, 4.43, 1, 43.33, 0
UNION ALL SELECT 1, 0, 2, 1, 3, 1, 4, 1
UNION ALL SELECT 1, 1, 2, 1, 3, 1, 4, 1
)
, AddRowNumbers AS (
SELECT rowNumber = ROW_NUMBER() OVER (ORDER BY C1)
, C1, C1Alarm
, C2, C2Alarm
, C3, C3Alarm
, C4, C4Alarm
FROM Alarm
)
, UnPivotColumns AS (
SELECT rowNumber, value = C1 FROM AddRowNumbers WHERE C1Alarm = 0
UNION ALL SELECT rowNumber, C2 FROM AddRowNumbers WHERE C2Alarm = 0
UNION ALL SELECT rowNumber, C3 FROM AddRowNumbers WHERE C3Alarm = 0
UNION ALL SELECT rowNumber, C4 FROM AddRowNumbers WHERE C4Alarm = 0
)
SELECT C1, C1Alarm
, C2, C2Alarm
, C3, C3Alarm
, C4, C4Alarm
, COALESCE(range1.range, range2.range)
FROM AddRowNumbers rowNumber
LEFT OUTER JOIN (SELECT rowNumber, range = MAX(value) - MIN(value) FROM UnPivotColumns GROUP BY rowNumber HAVING COUNT(*) > 1) range1 ON range1.rowNumber = rowNumber.rowNumber
LEFT OUTER JOIN (SELECT rowNumber, range = AVG(value) FROM UnPivotColumns GROUP BY rowNumber HAVING COUNT(*) = 1) range2 ON range2.rowNumber = rowNumber.rowNumber
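The query's logic, stripped of the unpivot plumbing, is a simple per-row rule: among the values whose alarm flag is 0, take max minus min when two or more survive, the lone value when exactly one survives, and NULL when none do. A plain-Python sketch of just that rule (the `row_range` helper is mine, not from the answer):

```python
# Range rule from the query above: consider only values whose alarm flag is 0.
def row_range(pairs):
    """pairs: [(value, alarm_flag), ...] for C1..C4."""
    ok = [v for v, alarm in pairs if alarm == 0]
    if len(ok) > 1:
        return max(ok) - min(ok)   # MAX(value) - MIN(value), COUNT(*) > 1
    if len(ok) == 1:
        return ok[0]               # AVG(value) of a single value is the value
    return None                    # all alarms set -> NULL

r_many = row_range([(12.44, 0), (99.43, 0), (4.43, 1), (43.33, 0)])
r_none = row_range([(1, 1), (2, 1), (3, 1), (4, 1)])
r_one  = row_range([(1, 0), (2, 1), (3, 1), (4, 1)])
```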
qid & accept id:
(7432065, 7434118)
query:
How can I do sql union in cake php?
soup:
You can do this in 4 or more different ways... the easiest, but not recommended, is using
\n$this->Model->query($query); \n
\nwhere $query is the query stated above.
\nThe second way, which may not be what you want, is to rewrite your SQL query; you will get the same result (but not separated by the alias) like this:
\nSELECT * FROM `videos` AS `U1` \nWHERE `U1`.`level_id` = '1' AND (`U1`.`submitted_date` > '2011-09-11' OR `U1`.`submitted_date` < '2011-09-11')\nORDER BY submitted_date DESC\nLIMIT 0,10\n
\nThis query can be easily done with find like this
\n$conditions = array(\n 'Video.level_id'=>1,\n 'OR' => array(\n 'Video.submitted_date <'=> '2011-09-11',\n 'Video.submitted_date >'=> '2011-09-11'\n )\n);\n$this->Video->find('all', array('conditions'=>$conditions)) \n
\nThe third way will be the one that Abba Bryant talk about, explained in detail here Union syntax in cakePhp that works building the statement directly.
\nThe fourth way is more or less like the first one: you create a behavior with a beforeFind function, where you check for a 'union' option and build the query, or do something along the lines of the third option.
\nYou would call it with a find like this:
\n$this->Video->find('all', array('conditions'=>$conditions, 'union'=> $union));\n
\nThis would be more or less like the Linkable or Containable behavior.
\nThe fifth way is to modify your CakePHP SQL driver... I don't know exactly which changes you would have to make, but it is a way to get there... These drivers are responsible for interpreting and building the queries, connecting to the database, and executing the queries...
\nREMEMBER that CakePHP's find does the checks necessary to prevent SQL injection and other risks... $model->query will NOT do these checks, so be careful.
\n
soup wrap:
You can do this in 4 or more different ways... the easiest, but not recommended, is using
$this->Model->query($query);
where $query is the query stated above.
The second way, which may not be what you want, is to rewrite your SQL query; you will get the same result (but not separated by the alias) like this:
SELECT * FROM `videos` AS `U1`
WHERE `U1`.`level_id` = '1' AND (`U1`.`submitted_date` > '2011-09-11' OR `U1`.`submitted_date` < '2011-09-11')
ORDER BY submitted_date DESC
LIMIT 0,10
This query can be easily done with find like this
$conditions = array(
'Video.level_id'=>1,
'OR' => array(
'Video.submitted_date <'=> '2011-09-11',
'Video.submitted_date >'=> '2011-09-11'
)
);
$this->Video->find('all', array('conditions'=>$conditions))
The third way will be the one that Abba Bryant talk about, explained in detail here Union syntax in cakePhp that works building the statement directly.
The fourth way is more or less like the first one: you create a behavior with a beforeFind function, where you check for a 'union' option and build the query, or do something along the lines of the third option.
You would call it with a find like this:
$this->Video->find('all', array('conditions'=>$conditions, 'union'=> $union));
This would be more or less like the Linkable or Containable behavior.
The fifth way is to modify your CakePHP SQL driver... I don't know exactly which changes you would have to make, but it is a way to get there... These drivers are responsible for interpreting and building the queries, connecting to the database, and executing the queries...
REMEMBER that CakePHP's find does the checks necessary to prevent SQL injection and other risks... $model->query will NOT do these checks, so be careful.
qid & accept id:
(7557231, 7557630)
query:
Select * from n tables
soup:
To list ALL tables you could try:
\nEXEC sp_msforeachtable 'SELECT * FROM ?'\n
\nYou can programmatically include/exclude tables by doing something like:
\nEXEC sp_msforeachtable 'IF LEFT(''?'',9)=''[dbo].[xy'' BEGIN SELECT * FROM ? END ELSE PRINT LEFT(''?'',9)'\n
\n
soup wrap:
To list ALL tables you could try:
EXEC sp_msforeachtable 'SELECT * FROM ?'
You can programmatically include/exclude tables by doing something like:
EXEC sp_msforeachtable 'IF LEFT(''?'',9)=''[dbo].[xy'' BEGIN SELECT * FROM ? END ELSE PRINT LEFT(''?'',9)'
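sp_msforeachtable is SQL Server-only (and undocumented), but the underlying idea is portable: read the table names from the system catalog, then loop and select from each. A sketch with Python's sqlite3, whose catalog is sqlite_master; the two tables and their rows are made up for the demo.

```python
import sqlite3

# Loop over every user table in the catalog and SELECT * from each,
# the portable equivalent of sp_msforeachtable 'SELECT * FROM ?'.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t1 (a INTEGER);
CREATE TABLE t2 (b TEXT);
INSERT INTO t1 VALUES (1), (2);
INSERT INTO t2 VALUES ('x');
""")

tables = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]

results = {}
for t in tables:
    # Table names come from the catalog, not user input, so string
    # interpolation is safe here; never do this with untrusted strings.
    results[t] = con.execute(f"SELECT * FROM {t}").fetchall()
```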
qid & accept id:
(7558371, 7558470)
query:
10g Package Construction - Restricting References
soup:
You cannot refer using static SQL to objects that do not exist when the code is compiled. There is nothing you can do about that.
\nYou would need to modify your code to use dynamic SQL to refer to any object that is created at runtime. You can probably use EXECUTE IMMEDIATE, i.e.
\nEXECUTE IMMEDIATE \n 'SELECT COUNT(*) FROM new_mv_name'\n INTO l_cnt;\n
\nrather than
\nSELECT COUNT(*)\n INTO l_cnt\n FROM new_mv_name;\n
\nThat being said, however, I would be extremely dubious about a PL/SQL implementation that involved creating any new tables and materialized views at runtime. That is almost always a mistake in Oracle. Why do you need to create new objects at runtime?
\n
soup wrap:
You cannot refer using static SQL to objects that do not exist when the code is compiled. There is nothing you can do about that.
You would need to modify your code to use dynamic SQL to refer to any object that is created at runtime. You can probably use EXECUTE IMMEDIATE, i.e.
EXECUTE IMMEDIATE
'SELECT COUNT(*) FROM new_mv_name'
INTO l_cnt;
rather than
SELECT COUNT(*)
INTO l_cnt
FROM new_mv_name;
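The same pattern in miniature, outside PL/SQL: because the object only exists at runtime, the statement is built as a string and executed then. A sqlite3 sketch; the name new_mv_name is taken from the answer, and the data is made up.

```python
import sqlite3

# Dynamic SQL in miniature: the table name is only known at runtime,
# so the statement is assembled as a string and executed then
# (the EXECUTE IMMEDIATE idea).
con = sqlite3.connect(":memory:")
mv_name = "new_mv_name"          # decided at runtime, not compile time
con.execute(f"CREATE TABLE {mv_name} (x INTEGER)")
con.execute(f"INSERT INTO {mv_name} VALUES (1), (2), (3)")

# The dynamic equivalent of SELECT COUNT(*) INTO l_cnt FROM new_mv_name
(l_cnt,) = con.execute(f"SELECT COUNT(*) FROM {mv_name}").fetchone()
```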
That being said, however, I would be extremely dubious about a PL/SQL implementation that involved creating any new tables and materialized views at runtime. That is almost always a mistake in Oracle. Why do you need to create new objects at runtime?
qid & accept id:
(7605630, 7605650)
query:
What's the best approach to dynamically display a single product from the database?
soup:
If you are using MS SQL Server you can order by the NEWID() function to get a random row of data. You still need a service/page on the server side to run this code for you.
\nselect top 1 productName, sku\nfrom products\norder by newid()\n
\nfor MySql this would suffice
\nSELECT productName, sku\nFROM products\nORDER BY Rand()\nLIMIT 1\n
\n
soup wrap:
If you are using MS SQL Server you can order by the NEWID() function to get a random row of data. You still need a service/page on the server side to run this code for you.
select top 1 productName, sku
from products
order by newid()
for MySql this would suffice
SELECT productName, sku
FROM products
ORDER BY Rand()
LIMIT 1
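SQLite has the same trick as MySQL, spelled ORDER BY RANDOM(). A quick check from Python; the products table and its three rows are made up.

```python
import sqlite3

# Pick one random row: ORDER BY RANDOM() LIMIT 1 (SQLite's spelling of
# MySQL's ORDER BY Rand() LIMIT 1).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE products (productName TEXT, sku TEXT);
INSERT INTO products VALUES
 ('Widget', 'W-1'), ('Gadget', 'G-2'), ('Gizmo', 'Z-3');
""")

name, sku = con.execute(
    "SELECT productName, sku FROM products ORDER BY RANDOM() LIMIT 1"
).fetchone()
```

Note that sorting the whole table just to pick one row is O(n log n); fine for small tables, worth rethinking for big ones.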
qid & accept id:
(7656057, 7658392)
query:
How to make temporary table with row for each of last 24 hours?
soup:
One row for each hour for a given date (SQL Server solution).
\nselect dateadd(hour, Number, '20110101')\nfrom master..spt_values\nwhere type = 'P' and\n number between 0 and 23\n
\n\nResult with a row for each hour in the last 24 hours:
\n
\nselect dateadd(hour, datediff(hour, 0, getdate()) - number, 0)\nfrom master..spt_values\nwhere type = 'P' and\n number between 0 and 23\n
\n
soup wrap:
One row for each hour for a given date (SQL Server solution).
select dateadd(hour, Number, '20110101')
from master..spt_values
where type = 'P' and
number between 0 and 23
Result with a row for each hour in the last 24 hours:
select dateadd(hour, datediff(hour, 0, getdate()) - number, 0)
from master..spt_values
where type = 'P' and
number between 0 and 23
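master..spt_values is a SQL Server-specific numbers source; outside T-SQL the same result is ordinary date arithmetic: truncate now to the hour, then subtract 0..23 hours. A Python sketch with a fixed stand-in for getdate() so the output is deterministic:

```python
from datetime import datetime, timedelta

# One timestamp per hour for the last 24 hours, each truncated to the hour,
# mirroring dateadd(hour, datediff(hour, 0, getdate()) - number, 0).
now = datetime(2011, 10, 6, 14, 37)                 # stand-in for getdate()
top_of_hour = now.replace(minute=0, second=0, microsecond=0)
last_24_hours = [top_of_hour - timedelta(hours=n) for n in range(24)]
```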
qid & accept id:
(7676110, 7676269)
query:
How to remove duplicates from table using SQL query
soup:
It looks like all four column values are duplicated so you can do this -
\nselect distinct emp_name, emp_address, sex, marital_status\nfrom YourTable\n
\nHowever if marital status can be different and you have some other column based on which to choose (for eg you want latest record based on a column create_date) you can do this
\nselect emp_name, emp_address, sex, marital_status\nfrom YourTable a\nwhere not exists (select 1 \n from YourTable b\n where b.emp_name = a.emp_name and\n b.emp_address = a.emp_address and\n b.sex = a.sex and\n b.create_date > a.create_date)\n
\n
soup wrap:
It looks like all four column values are duplicated so you can do this -
select distinct emp_name, emp_address, sex, marital_status
from YourTable
However if marital status can be different and you have some other column based on which to choose (for eg you want latest record based on a column create_date) you can do this
select emp_name, emp_address, sex, marital_status
from YourTable a
where not exists (select 1
from YourTable b
where b.emp_name = a.emp_name and
b.emp_address = a.emp_address and
b.sex = a.sex and
b.create_date > a.create_date)
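The comparison must be strict (b.create_date > a.create_date); with >= every row would match itself inside the subquery and nothing would be returned. A sqlite3 check of the strict version; the column names follow the answer, the three sample rows are made up.

```python
import sqlite3

# Keep, per employee, only the row with the latest create_date,
# via the NOT EXISTS anti-join from the answer.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE YourTable (emp_name TEXT, emp_address TEXT, sex TEXT,
                        marital_status TEXT, create_date TEXT);
INSERT INTO YourTable VALUES
 ('Tom', '12 Main St', 'M', 'single',  '2011-01-01'),
 ('Tom', '12 Main St', 'M', 'married', '2011-06-01'),
 ('Pam', '9 Oak Ave',  'F', 'single',  '2011-03-01');
""")

latest = con.execute("""
    SELECT emp_name, marital_status
    FROM YourTable a
    WHERE NOT EXISTS (SELECT 1 FROM YourTable b
                      WHERE b.emp_name = a.emp_name
                        AND b.emp_address = a.emp_address
                        AND b.sex = a.sex
                        AND b.create_date > a.create_date)
    ORDER BY emp_name
""").fetchall()
```

Tom's older 'single' row is dropped because a strictly later row for him exists; Pam's only row survives.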
qid & accept id:
(7681122, 7681158)
query:
Oracle - Modify an existing table to auto-increment a column
soup:
If your MAX(noteid) is 799, then try:
\nCREATE SEQUENCE noteseq\n START WITH 800\n INCREMENT BY 1\n
\nThen when inserting a new record, for the NOTEID column, you would do:
\nnoteseq.nextval\n
\n
soup wrap:
If your MAX(noteid) is 799, then try:
CREATE SEQUENCE noteseq
START WITH 800
INCREMENT BY 1
Then when inserting a new record, for the NOTEID column, you would do:
noteseq.nextval
qid & accept id:
(7745609, 7745635)
query:
SQL select only rows with max value on a column
soup:
At first glance...
\nAll you need is a GROUP BY clause with the MAX aggregate function:
\nSELECT id, MAX(rev)\nFROM YourTable\nGROUP BY id\n
\nIt's never that simple, is it?
\nI just noticed you need the content column as well.
\nThis is a very common question in SQL: find the whole data for the row with some max value in a column per some group identifier. I heard that a lot during my career. Actually, it was one of the questions I answered in my current job's technical interview.
\nIt is, actually, so common that StackOverflow community has created a single tag just to deal with questions like that: greatest-n-per-group.
\nBasically, you have two approaches to solve that problem:
\nJoining with simple group-identifier, max-value-in-group Sub-query
\nIn this approach, you first find the group-identifier, max-value-in-group (already solved above) in a sub-query. Then you join your table to the sub-query with equality on both group-identifier and max-value-in-group:
\nSELECT a.id, a.rev, a.contents\nFROM YourTable a\nINNER JOIN (\n SELECT id, MAX(rev) rev\n FROM YourTable\n GROUP BY id\n) b ON a.id = b.id AND a.rev = b.rev\n
\nLeft Joining with self, tweaking join conditions and filters
\nIn this approach, you left join the table with itself. Equality, of course, goes in the group-identifier. Then, 2 smart moves:
\n\n- The second join condition is having left side value less than right value
\n- When you do step 1, the row(s) that actually have the max value will have
NULL in the right side (it's a LEFT JOIN, remember?). Then, we filter the joined result, showing only the rows where the right side is NULL. \n
\nSo you end up with:
\nSELECT a.*\nFROM YourTable a\nLEFT OUTER JOIN YourTable b\n ON a.id = b.id AND a.rev < b.rev\nWHERE b.id IS NULL;\n
\nConclusion
\nBoth approaches bring the exact same result.
\nIf you have two rows with max-value-in-group for group-identifier, both rows will be in the result in both approaches.
\nBoth approaches are ANSI SQL compatible and thus will work with your favorite RDBMS, regardless of its "flavor".
\nBoth approaches are also performance friendly; however, your mileage may vary (RDBMS, DB structure, indexes, etc.). So when you pick one approach over the other, benchmark. And make sure you pick the one which makes the most sense to you.
\n
soup wrap:
At first glance...
All you need is a GROUP BY clause with the MAX aggregate function:
SELECT id, MAX(rev)
FROM YourTable
GROUP BY id
It's never that simple, is it?
I just noticed you need the content column as well.
This is a very common question in SQL: find the whole data for the row with some max value in a column per some group identifier. I heard that a lot during my career. Actually, it was one of the questions I answered in my current job's technical interview.
It is, actually, so common that StackOverflow community has created a single tag just to deal with questions like that: greatest-n-per-group.
Basically, you have two approaches to solve that problem:
Joining with simple group-identifier, max-value-in-group Sub-query
In this approach, you first find the group-identifier, max-value-in-group (already solved above) in a sub-query. Then you join your table to the sub-query with equality on both group-identifier and max-value-in-group:
SELECT a.id, a.rev, a.contents
FROM YourTable a
INNER JOIN (
SELECT id, MAX(rev) rev
FROM YourTable
GROUP BY id
) b ON a.id = b.id AND a.rev = b.rev
Left Joining with self, tweaking join conditions and filters
In this approach, you left join the table with itself. Equality, of course, goes in the group-identifier. Then, 2 smart moves:
- The second join condition is having left side value less than right value
- When you do step 1, the row(s) that actually have the max value will have
NULL in the right side (it's a LEFT JOIN, remember?). Then, we filter the joined result, showing only the rows where the right side is NULL.
So you end up with:
SELECT a.*
FROM YourTable a
LEFT OUTER JOIN YourTable b
ON a.id = b.id AND a.rev < b.rev
WHERE b.id IS NULL;
Conclusion
Both approaches bring the exact same result.
If you have two rows with max-value-in-group for group-identifier, both rows will be in the result in both approaches.
Both approaches are ANSI SQL compatible and thus will work with your favorite RDBMS, regardless of its "flavor".
Both approaches are also performance friendly; however, your mileage may vary (RDBMS, DB structure, indexes, etc.). So when you pick one approach over the other, benchmark. And make sure you pick the one which makes the most sense to you.
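Both queries can be run side by side on toy data to confirm they agree, ties included. A sqlite3 sketch; the table name follows the answer, the five sample rows (including a tied max rev for id 3) are made up.

```python
import sqlite3

# Run both greatest-n-per-group approaches and compare their results.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE YourTable (id INTEGER, rev INTEGER, contents TEXT);
INSERT INTO YourTable VALUES
 (1, 1, 'old'), (1, 2, 'new'),
 (2, 1, 'only'),
 (3, 1, 'tie-a'), (3, 1, 'tie-b');
""")

# Approach 1: join with the (group-identifier, max-value) subquery
join_subquery = con.execute("""
    SELECT a.id, a.rev, a.contents
    FROM YourTable a
    INNER JOIN (SELECT id, MAX(rev) rev FROM YourTable GROUP BY id) b
            ON a.id = b.id AND a.rev = b.rev
    ORDER BY a.id, a.contents
""").fetchall()

# Approach 2: left self-join, keep rows with no greater rev in their group
left_self_join = con.execute("""
    SELECT a.id, a.rev, a.contents
    FROM YourTable a
    LEFT OUTER JOIN YourTable b ON a.id = b.id AND a.rev < b.rev
    WHERE b.id IS NULL
    ORDER BY a.id, a.contents
""").fetchall()
```

Both keep (1, 2, 'new'), (2, 1, 'only'), and both tied rows for id 3, exactly as the conclusion states.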
qid & accept id:
(7748125, 7748276)
query:
SQL find two consecutive days in a reservation system
soup:
Just join to the availability table twice
\nSELECT rooms.* FROM rooms, availability as a1, availability as a2\nWHERE rooms.id = 123\nAND a1.room_id = rooms.id\nAND a2.room_id= rooms.id\nAND a1.date_occupied + 1 = a2.date_occupied\n
\nor, if we're not into writing SQL like it's 1985:
\nSELECT rooms.* FROM rooms\nJOIN availability a1 on a1.room_id = rooms.id\nJoin availability a2 on a2.room_id = rooms.id AND a1.date_occupied + 1 = a2.date_occupied\nWHERE rooms.id = 123\n
\n
soup wrap:
Just join to the availability table twice
SELECT rooms.* FROM rooms, availability as a1, availability as a2
WHERE rooms.id = 123
AND a1.room_id = rooms.id
AND a2.room_id= rooms.id
AND a1.date_occupied + 1 = a2.date_occupied
or, if we're not into writing SQL like it's 1985:
SELECT rooms.* FROM rooms
JOIN availability a1 on a1.room_id = rooms.id
Join availability a2 on a2.room_id = rooms.id AND a1.date_occupied + 1 = a2.date_occupied
WHERE rooms.id = 123
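In engines where `date + 1` isn't valid on a date column, the increment becomes an explicit date function. A sqlite3 sketch of the self-join, with `date(date_occupied, '+1 day')` standing in for `date_occupied + 1`; table and column names follow the answer, the rooms and dates are made up.

```python
import sqlite3

# Find rooms with two consecutive occupied days via a double self-join
# on the availability table.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE rooms (id INTEGER PRIMARY KEY);
CREATE TABLE availability (room_id INTEGER, date_occupied TEXT);
INSERT INTO rooms VALUES (123), (456);
INSERT INTO availability VALUES
 (123, '2011-10-01'), (123, '2011-10-02'),   -- consecutive days
 (456, '2011-10-01'), (456, '2011-10-03');   -- gap in between
""")

rooms_with_pair = con.execute("""
    SELECT DISTINCT rooms.id
    FROM rooms
    JOIN availability a1 ON a1.room_id = rooms.id
    JOIN availability a2 ON a2.room_id = rooms.id
       AND date(a1.date_occupied, '+1 day') = a2.date_occupied
""").fetchall()
```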
qid & accept id:
(7763635, 7763673)
query:
SQL Sort by popularity?
soup:
SELECT PP.playgroup_id, COUNT(*) cnt\nFROM playgroup_players PP\nGROUP BY PP.playgroup_id\nORDER BY COUNT(*) DESC\n
\nThis will give you a list of playgroups that have at least one player sorted by the number of players. Of course, field name is made up :)
\nSELECT G.playgroup_id, COUNT(PP.playgroup_id) cnt\nFROM playgroup G\n LEFT OUTER JOIN playgroup_players PP ON (PP.playgroup_id=G.playgroup_id)\nGROUP BY G.playgroup_id\nORDER BY COUNT(*) DESC\n
\nThis should give you a list of ALL playgroups (even the ones with no players). I've tested this on Oracle and on some of my own data and it works
\n
soup wrap:
SELECT PP.playgroup_id, COUNT(*) cnt
FROM playgroup_players PP
GROUP BY PP.playgroup_id
ORDER BY COUNT(*) DESC
This will give you a list of playgroups that have at least one player sorted by the number of players. Of course, field name is made up :)
SELECT G.playgroup_id, COUNT(PP.playgroup_id) cnt
FROM playgroup G
LEFT OUTER JOIN playgroup_players PP ON (PP.playgroup_id=G.playgroup_id)
GROUP BY G.playgroup_id
ORDER BY COUNT(PP.playgroup_id) DESC
This should give you a list of ALL playgroups (even the ones with no players). I've tested this on Oracle and on some of my own data and it works
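Here is a small SQLite check of the outer-join version (table names from the answer, data invented). Note that counting `PP.playgroup_id` rather than `*` matters: with a `LEFT OUTER JOIN`, `COUNT(*)` would count the NULL-padded row and report 1 for an empty playgroup.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE playgroup (playgroup_id INTEGER PRIMARY KEY);
CREATE TABLE playgroup_players (playgroup_id INTEGER, player_id INTEGER);
INSERT INTO playgroup VALUES (1), (2), (3);
INSERT INTO playgroup_players VALUES (1, 10), (1, 11), (2, 12);  -- group 3 has no players
""")

# Every playgroup appears, empty ones with a count of 0.
ranked = conn.execute("""
SELECT G.playgroup_id, COUNT(PP.playgroup_id) cnt
FROM playgroup G
  LEFT OUTER JOIN playgroup_players PP ON PP.playgroup_id = G.playgroup_id
GROUP BY G.playgroup_id
ORDER BY COUNT(PP.playgroup_id) DESC
""").fetchall()
```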
qid & accept id:
(7794875, 7795191)
query:
Join a table to itself
soup:
You can perfectly join the table with it self.
\nYou should be aware, however, that your design allows you to have multiple levels of hierarchy. Since you are using SQL Server (assuming 2005 or higher), you can have a recursive CTE get your tree structure.
\nProof of concept preparation:
\ndeclare @YourTable table (id int, parentid int, title varchar(20))\n\ninsert into @YourTable values\n(1,null, 'root'),\n(2,1, 'something'),\n(3,1, 'in the way'),\n(4,1, 'she moves'),\n(5,3, ''),\n(6,null, 'I don''t know'),\n(7,6, 'Stick around');\n
\nQuery 1 - Node Levels:
\nwith cte as (\n select Id, ParentId, Title, 1 level \n from @YourTable where ParentId is null\n\n union all\n\n select yt.Id, yt.ParentId, yt.Title, cte.level + 1\n from @YourTable yt inner join cte on cte.Id = yt.ParentId\n)\nselect cte.*\nfrom cte \norder by level, id, Title\n
\n
soup wrap:
You can certainly join the table with itself.
You should be aware, however, that your design allows you to have multiple levels of hierarchy. Since you are using SQL Server (assuming 2005 or higher), you can have a recursive CTE get your tree structure.
Proof of concept preparation:
declare @YourTable table (id int, parentid int, title varchar(20))
insert into @YourTable values
(1,null, 'root'),
(2,1, 'something'),
(3,1, 'in the way'),
(4,1, 'she moves'),
(5,3, ''),
(6,null, 'I don''t know'),
(7,6, 'Stick around');
Query 1 - Node Levels:
with cte as (
select Id, ParentId, Title, 1 level
from @YourTable where ParentId is null
union all
select yt.Id, yt.ParentId, yt.Title, cte.level + 1
from @YourTable yt inner join cte on cte.Id = yt.ParentId
)
select cte.*
from cte
order by level, id, Title
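The same recursive CTE runs almost verbatim on SQLite (which requires the `RECURSIVE` keyword and a real table instead of a table variable), so here is a self-contained check of the level numbering:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE YourTable (id INTEGER, parentid INTEGER, title TEXT);
INSERT INTO YourTable VALUES
 (1, NULL, 'root'), (2, 1, 'something'), (3, 1, 'in the way'),
 (4, 1, 'she moves'), (5, 3, ''), (6, NULL, 'I don''t know'),
 (7, 6, 'Stick around');
""")

# Anchor: the roots (parentid IS NULL) at level 1.
# Recursive step: children get their parent's level + 1.
levels = conn.execute("""
WITH RECURSIVE cte AS (
  SELECT id, parentid, title, 1 AS level
  FROM YourTable WHERE parentid IS NULL
  UNION ALL
  SELECT yt.id, yt.parentid, yt.title, cte.level + 1
  FROM YourTable yt JOIN cte ON cte.id = yt.parentid
)
SELECT id, level FROM cte ORDER BY level, id
""").fetchall()
```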
qid & accept id:
(7830197, 7830231)
query:
Remove values in comma separated list from database
soup:
Using these user-defined REGEXP_REPLACE() functions, you may be able to replace it with an empty string:
\nUPDATE children SET wishes = REGEXP_REPLACE(wishes, '(,(\s)?)?Surfboard', '') WHERE caseNum='whatever';\n
\nUnfortunately, you cannot just use plain old REPLACE() because you don't know where in the string 'Surfboard' appears. In fact, the regex above would probably need additional tweaking if 'Surfboard' occurs at the beginning or end.
\nPerhaps you could trim off leading and trailing commas left over like this:
\nUPDATE children SET wishes = TRIM(BOTH ',' FROM REGEXP_REPLACE(wishes, '(,(\s)?)?Surfboard', '')) WHERE caseNum='whatever';\n
\nSo what's going on here? The regex removes 'Surfboard' plus an optional comma & space before it. Then the surrounding TRIM() function eliminates a possible leading comma in case 'Surfboard' occurred at the beginning of the string. That could probably be handled by the regex as well, but frankly, I'm too tired to puzzle it out.
\nNote, I've never used these myself and cannot vouch for their effectiveness or robustness, but it is a place to start. And, as others are mentioning in the comments, you really should have these in a normalized wishlist table, rather than as a comma-separated string.
\nUpdate
\nThinking about this more, I'm more partial to just forcing the use of built-in REPLACE() and then cleaning out the extra comma where you may get two commas in a row. This is looking for two commas side by side, as though there had been no spaces separating your original list items. If the items had been separated by commas and spaces, change ',,' to ', ,' in the outer REPLACE() call.
\nUPDATE children SET wishes = TRIM(BOTH ',' FROM REPLACE(REPLACE(wishes, 'Surfboard', ''), ',,', ',')) WHERE caseNum='whatever';\n
\n
soup wrap:
Using these user-defined REGEXP_REPLACE() functions, you may be able to replace it with an empty string:
UPDATE children SET wishes = REGEXP_REPLACE(wishes, '(,(\s)?)?Surfboard', '') WHERE caseNum='whatever';
Unfortunately, you cannot just use plain old REPLACE() because you don't know where in the string 'Surfboard' appears. In fact, the regex above would probably need additional tweaking if 'Surfboard' occurs at the beginning or end.
Perhaps you could trim off leading and trailing commas left over like this:
UPDATE children SET wishes = TRIM(BOTH ',' FROM REGEXP_REPLACE(wishes, '(,(\s)?)?Surfboard', '')) WHERE caseNum='whatever';
So what's going on here? The regex removes 'Surfboard' plus an optional comma & space before it. Then the surrounding TRIM() function eliminates a possible leading comma in case 'Surfboard' occurred at the beginning of the string. That could probably be handled by the regex as well, but frankly, I'm too tired to puzzle it out.
Note, I've never used these myself and cannot vouch for their effectiveness or robustness, but it is a place to start. And, as others are mentioning in the comments, you really should have these in a normalized wishlist table, rather than as a comma-separated string.
Update
Thinking about this more, I'm more partial to just forcing the use of built-in REPLACE() and then cleaning out the extra comma where you may get two commas in a row. This is looking for two commas side by side, as though there had been no spaces separating your original list items. If the items had been separated by commas and spaces, change ',,' to ', ,' in the outer REPLACE() call.
UPDATE children SET wishes = TRIM(BOTH ',' FROM REPLACE(REPLACE(wishes, 'Surfboard', ''), ',,', ',')) WHERE caseNum='whatever';
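The final REPLACE-then-TRIM idea is easy to sanity-check outside the database. This Python sketch mirrors it step for step (`remove_wish` is a made-up helper name; it assumes the list has no surrounding spaces, like the outer-REPLACE variant in the answer):

```python
def remove_wish(wishes: str, item: str) -> str:
    # Mirror REPLACE(REPLACE(wishes, item, ''), ',,', ',') ...
    cleaned = wishes.replace(item, "").replace(",,", ",")
    # ... then TRIM(BOTH ',' FROM ...) for the start/end cases.
    return cleaned.strip(",")

# Works whether the item sits in the middle, at the start, or at the end.
middle = remove_wish("Bike,Surfboard,Doll", "Surfboard")
start = remove_wish("Surfboard,Bike", "Surfboard")
end = remove_wish("Bike,Surfboard", "Surfboard")
```

It also illustrates the caveat from the answer: this is substring removal, so an item like `'Board'` would mangle `'Surfboard'` — one more argument for the normalized wishlist table.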
qid & accept id:
(7901416, 7901490)
query:
Best way to update table with values calculated from same table
soup:
try creating a temp table in memory:
\nDECLARE @temp_receipts TABLE (\nAssociatedReceiptID int,\nsum_value int)\n
\nthen:
\ninsert into @temp_receipts\nSELECT AssociatedReceiptID, sum(Value)\nFROM Receipt\nGROUP BY AssociatedReceiptID\n
\nand then update the main table totals:
\nUPDATE Receipt r\nSET Total = (SELECT sum_value\n FROM @temp_receipts tt\n WHERE r.AssociatedReceiptID = tt.AssociatedReceiptID)\n
\nHowever, I would create a table called receipt_totals or something and use that instead. It makes no sense to have the total of each associated receipt in every single related row. if you are doing it for query convenience consider creating a view between receipts and receipt_totals
\n
soup wrap:
Try creating a table variable to hold the per-receipt sums:
DECLARE @temp_receipts TABLE (
AssociatedReceiptID int,
sum_value int)
then:
insert into @temp_receipts
SELECT AssociatedReceiptID, sum(Value)
FROM Receipt
GROUP BY AssociatedReceiptID
and then update the main table totals:
UPDATE Receipt r
SET Total = (SELECT sum_value
FROM @temp_receipts tt
WHERE r.AssociatedReceiptID = tt.AssociatedReceiptID)
However, I would create a table called receipt_totals or something and use that instead. It makes no sense to store the total of each associated receipt in every single related row. If you are doing it for query convenience, consider creating a view between receipts and receipt_totals.
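The three steps translate to SQLite like this (a `TEMP TABLE` stands in for the table variable, and the `UPDATE` uses a correlated subquery since SQLite doesn't accept an alias after the target table; the `Receipt` columns follow the answer, the sample data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Receipt (id INTEGER, AssociatedReceiptID INTEGER,
                      Value INTEGER, Total INTEGER);
INSERT INTO Receipt VALUES (1, 100, 5, NULL), (2, 100, 7, NULL),
                           (3, 200, 3, NULL);

-- Step 1 & 2: materialize the per-group sums.
CREATE TEMP TABLE temp_receipts AS
  SELECT AssociatedReceiptID, SUM(Value) AS sum_value
  FROM Receipt GROUP BY AssociatedReceiptID;

-- Step 3: write each group's sum back onto every member row.
UPDATE Receipt
SET Total = (SELECT sum_value FROM temp_receipts tt
             WHERE Receipt.AssociatedReceiptID = tt.AssociatedReceiptID);
""")
totals = conn.execute("SELECT id, Total FROM Receipt ORDER BY id").fetchall()
```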
qid & accept id:
(7905182, 7905222)
query:
How do I turn off this error temporarily while I delete a record?
soup:
\nI see that there are some keys set with references between the tables\n how do I just force the deletion anyway?
\n
\nYou can do this, but its probably better just to update or delete the rows in the referencing table
\nALTER TABLE InviteConfiguration NOCHECK CONSTRAINT ALL\n
\nor with a slightly smaller hammer
\n ALTER TABLE InviteConfiguration NOCHECK CONSTRAINT FK_InviteConfiguration_Invite\n
\n
soup wrap:
I see that there are some keys set with references between the tables
how do I just force the deletion anyway?
You can do this, but it's probably better just to update or delete the rows in the referencing table. If you do disable the constraint, remember to re-enable it afterwards (ALTER TABLE InviteConfiguration WITH CHECK CHECK CONSTRAINT ALL), or it will be left untrusted.
ALTER TABLE InviteConfiguration NOCHECK CONSTRAINT ALL
or with a slightly smaller hammer
ALTER TABLE InviteConfiguration NOCHECK CONSTRAINT FK_InviteConfiguration_Invite
qid & accept id:
(7991363, 7991989)
query:
How to pull out schema of db from MySQL/phpMyAdmin?
soup:
Not sure exactly what you want. You can try one of these methods:
\n1) Use phpMyAdmin's export feature to export the database. PMA allows you to omit the data.
\n2) You can do the same using mysqldump. This command should export CREATE DATABASE/CREATE TABLE queries:
\nmysqldump -hlocalhost -uroot -proot --all-databases --no-data > create-database-and-tables.sql\n
\n3) You can pull information from mySQL schema tables. Most mySQL clients (phpMyAdmin, HeidiSQL etc) allow you to export result of queries as CSV. Some useful queries:
\n/*\n * DATABASE, TABLE, TYPE\n */\nSELECT TABLE_SCHEMA, TABLE_NAME, TABLE_TYPE\nFROM INFORMATION_SCHEMA.TABLES\nWHERE TABLE_SCHEMA NOT IN ('information_schema', 'performance_schema', 'mysql')\nORDER BY TABLE_SCHEMA, TABLE_NAME, TABLE_TYPE\n\n/*\n * DATABASE, TABLE, COLUMN, TYPE\n */\nSELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE, IS_NULLABLE /* ETC */\nFROM INFORMATION_SCHEMA.COLUMNS\nWHERE TABLE_SCHEMA NOT IN ('information_schema', 'performance_schema', 'mysql')\nORDER BY TABLE_SCHEMA, TABLE_NAME, ORDINAL_POSITION\n
\n
soup wrap:
Not sure exactly what you want. You can try one of these methods:
1) Use phpMyAdmin's export feature to export the database. PMA allows you to omit the data.
2) You can do the same using mysqldump. This command should export CREATE DATABASE/CREATE TABLE queries:
mysqldump -hlocalhost -uroot -proot --all-databases --no-data > create-database-and-tables.sql
3) You can pull information from MySQL schema tables. Most MySQL clients (phpMyAdmin, HeidiSQL, etc.) allow you to export the result of queries as CSV. Some useful queries:
/*
* DATABASE, TABLE, TYPE
*/
SELECT TABLE_SCHEMA, TABLE_NAME, TABLE_TYPE
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA NOT IN ('information_schema', 'performance_schema', 'mysql')
ORDER BY TABLE_SCHEMA, TABLE_NAME, TABLE_TYPE
/*
* DATABASE, TABLE, COLUMN, TYPE
*/
SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE, IS_NULLABLE /* ETC */
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA NOT IN ('information_schema', 'performance_schema', 'mysql')
ORDER BY TABLE_SCHEMA, TABLE_NAME, ORDINAL_POSITION
qid & accept id:
(7994408, 7994437)
query:
Use Alias in Select Query
soup:
You cannot do this:
\nSELECT (Complex SubQuery) AS A, (Another Sub Query WHERE ID = A) FROM TABLE\n
\nYou can however do this:
\nSELECT (Another Sub Query WHERE ID = A.somecolumn)\nFROM table\nJOIN SELECT (Complex SubQuery) AS A on (A.X = TABLE.Y)\n
\nOr
\nSELECT (Another Sub Query)\nFROM table\nWHERE table.afield IN (SELECT Complex SubQuery.otherfield)\n
\nThe problem is that you cannot refer to aliases like this in the SELECT and WHERE clauses, because they will not have evaluated by the time the select or where part is executed.
\nYou can also use a having clause, but having clauses do not use indexes and should be avoided if possible.
\n
soup wrap:
You cannot do this:
SELECT (Complex SubQuery) AS A, (Another Sub Query WHERE ID = A) FROM TABLE
You can however do this:
SELECT (Another Sub Query WHERE ID = A.somecolumn)
FROM table
JOIN (Complex SubQuery) AS A ON (A.X = TABLE.Y)
Or
SELECT (Another Sub Query)
FROM table
WHERE table.afield IN (SELECT Complex SubQuery.otherfield)
The problem is that you cannot refer to aliases like this in the SELECT and WHERE clauses, because they will not have been evaluated by the time the SELECT or WHERE part is executed.
You can also use a having clause, but having clauses do not use indexes and should be avoided if possible.
qid & accept id:
(8001083, 8001125)
query:
SQL: ORDER BY `date` AND START WHERE`value`="something"?
soup:
SELECT \n y.*\nFROM\n YourTable y\nWHERE\n y.date <= (SELECT yb.date FROM YourTable yb WHERE yb.color = 'BLUE')\nORDER BY\n y.date DESC\nLIMIT 4 OFFSET 0\n
\nUpdated:
\nSELECT \n y.*\nFROM\n YourTable y\nWHERE\n /* The colors 'before' blue */\n y.date < (SELECT yb.date FROM YourTable yb WHERE yb.color = 'BLUE') or\n /* And blue itself */\n y.color = 'BLUE'\nORDER BY\n y.date DESC\nLIMIT 4 OFFSET 0\n
\nSecond update to meet newly discovered criteria.
\nSELECT \n y.*\nFROM\n YourTable y,\n (SELECT yb.id, yb.date FROM yb WHERE color = 'GREEN') ys\nWHERE\n /* The colors 'before' green */\n y.date < ys.date or\n /* The colors on the same date as green, but with greater \n or equal id to green. This includes green itself.\n Note the parentheses here. */\n (y.date = ys.date and y.id >= ys.id)\nORDER BY\n y.date DESC\nLIMIT 4 OFFSET 0\n
\n
soup wrap:
SELECT
y.*
FROM
YourTable y
WHERE
y.date <= (SELECT yb.date FROM YourTable yb WHERE yb.color = 'BLUE')
ORDER BY
y.date DESC
LIMIT 4 OFFSET 0
Updated:
SELECT
y.*
FROM
YourTable y
WHERE
/* The colors 'before' blue */
y.date < (SELECT yb.date FROM YourTable yb WHERE yb.color = 'BLUE') or
/* And blue itself */
y.color = 'BLUE'
ORDER BY
y.date DESC
LIMIT 4 OFFSET 0
Second update to meet newly discovered criteria.
SELECT
y.*
FROM
YourTable y,
(SELECT yb.id, yb.date FROM YourTable yb WHERE yb.color = 'GREEN') ys
WHERE
/* The colors 'before' green */
y.date < ys.date or
/* The colors on the same date as green, but with greater
or equal id to green. This includes green itself.
Note the parentheses here. */
(y.date = ys.date and y.id >= ys.id)
ORDER BY
y.date DESC
LIMIT 4 OFFSET 0
qid & accept id:
(8014982, 8015012)
query:
Is there a way to make a column's nullability depend on another column's nullability?
soup:
Assuming you are on SQL Server or something similar, you can do this with a CHECK constraint on your table. (Unfortunately, MySQL parses but ignores CHECK constraints, so you'd have to use a trigger for that platform.)
\nIf the table already exists:
\nALTER TABLE ADD CONSTRAINT CK_ExitDateReason\nCHECK (\n (ExitDate IS NULL AND ExitReason IS NULL) \n OR (ExitDate IS NOT NULL AND ExitReason IS NOT NULL) \n);\n
\nIf you are creating the table yourself:
\nCREATE TABLE dbo.Exit (\n ...\n\n , CONSTRAINT CK_ExitDateReason CHECK ...\n);\n
\nUsing a check constraint is preferable to using a trigger because:
\n\n- check constraints are more visible than triggers
\n- the constraint is part of the table definition, as opposed to code that is run separately, so it's logically cleaner
\n- I am willing to bet it is faster than a trigger too
\n
\n
soup wrap:
Assuming you are on SQL Server or something similar, you can do this with a CHECK constraint on your table. (Unfortunately, MySQL before 8.0.16 parses but ignores CHECK constraints, so on older versions you'd have to use a trigger.)
If the table already exists:
ALTER TABLE dbo.Exit ADD CONSTRAINT CK_ExitDateReason
CHECK (
(ExitDate IS NULL AND ExitReason IS NULL)
OR (ExitDate IS NOT NULL AND ExitReason IS NOT NULL)
);
If you are creating the table yourself:
CREATE TABLE dbo.Exit (
...
, CONSTRAINT CK_ExitDateReason CHECK ...
);
Using a check constraint is preferable to using a trigger because:
- check constraints are more visible than triggers
- the constraint is part of the table definition, as opposed to code that is run separately, so it's logically cleaner
- I am willing to bet it is faster than a trigger too
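SQLite also enforces CHECK constraints, so the both-NULL-or-both-NOT-NULL rule can be demonstrated there (table named `ExitLog` here just for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE ExitLog (
  ExitDate TEXT,
  ExitReason TEXT,
  CONSTRAINT CK_ExitDateReason CHECK (
    (ExitDate IS NULL AND ExitReason IS NULL)
    OR (ExitDate IS NOT NULL AND ExitReason IS NOT NULL)
  )
)
""")
conn.execute("INSERT INTO ExitLog VALUES (NULL, NULL)")            # allowed
conn.execute("INSERT INTO ExitLog VALUES ('2011-11-04', 'quit')")  # allowed

# A date without a reason violates the constraint.
try:
    conn.execute("INSERT INTO ExitLog VALUES ('2011-11-04', NULL)")
    violated = False
except sqlite3.IntegrityError:
    violated = True
```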
qid & accept id:
(8015482, 8016442)
query:
How to merge time intervals in SQL Server
soup:
You can use a recursive CTE to build a list of dates and then count the distinct dates.
\ndeclare @T table\n(\n startDate date,\n endDate date\n);\n\ninsert into @T values\n('2011-01-01', '2011-01-05'),\n('2011-01-04', '2011-01-08'),\n('2011-01-11', '2011-01-15');\n\nwith C as\n(\n select startDate,\n endDate\n from @T\n union all\n select dateadd(day, 1, startDate),\n endDate\n from C\n where dateadd(day, 1, startDate) < endDate \n)\nselect count(distinct startDate) as DayCount\nfrom C\noption (MAXRECURSION 0)\n
\nResult:
\nDayCount\n-----------\n11\n
\nOr you can use a numbers table. Here I use master..spt_values:
\ndeclare @MinStartDate date\nselect @MinStartDate = min(startDate)\nfrom @T\n\nselect count(distinct N.number)\nfrom @T as T\n inner join master..spt_values as N\n on dateadd(day, N.Number, @MinStartDate) between T.startDate and dateadd(day, -1, T.endDate)\nwhere N.type = 'P' \n
\n
soup wrap:
You can use a recursive CTE to build a list of dates and then count the distinct dates.
declare @T table
(
startDate date,
endDate date
);
insert into @T values
('2011-01-01', '2011-01-05'),
('2011-01-04', '2011-01-08'),
('2011-01-11', '2011-01-15');
with C as
(
select startDate,
endDate
from @T
union all
select dateadd(day, 1, startDate),
endDate
from C
where dateadd(day, 1, startDate) < endDate
)
select count(distinct startDate) as DayCount
from C
option (MAXRECURSION 0)
Result:
DayCount
-----------
11
Or you can use a numbers table. Here I use master..spt_values:
declare @MinStartDate date
select @MinStartDate = min(startDate)
from @T
select count(distinct N.number)
from @T as T
inner join master..spt_values as N
on dateadd(day, N.Number, @MinStartDate) between T.startDate and dateadd(day, -1, T.endDate)
where N.type = 'P'
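The recursive-CTE variant ports to SQLite with `date(..., '+1 day')` in place of `dateadd` (the sample intervals are the ones from the answer). Each row expands into its dates from startDate up to, but not including, endDate, and the distinct count merges the overlap:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE spans (startDate TEXT, endDate TEXT);
INSERT INTO spans VALUES
 ('2011-01-01', '2011-01-05'),
 ('2011-01-04', '2011-01-08'),   -- overlaps the first interval
 ('2011-01-11', '2011-01-15');
""")

# Expand each interval into individual days, then count distinct days.
day_count = conn.execute("""
WITH RECURSIVE C AS (
  SELECT startDate, endDate FROM spans
  UNION ALL
  SELECT date(startDate, '+1 day'), endDate
  FROM C WHERE date(startDate, '+1 day') < endDate
)
SELECT COUNT(DISTINCT startDate) FROM C
""").fetchone()[0]
```

This reproduces the DayCount of 11 shown above: Jan 1-7 plus Jan 11-14.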
qid & accept id:
(8030624, 8030698)
query:
Checking if specific tuple exists in table
soup:
Join Test to itself thusly:
\nselect t1.A, t1.B\nfrom Test t1\njoin Test t2 on t1.A = t2.B and t1.B = t2.A\n
\nOr use an intersection:
\nselect A, B from Test\nintersect\nselect B, A from Test\n
\nThe self-join would probably be faster though.
\n
soup wrap:
Join Test to itself thusly:
select t1.A, t1.B
from Test t1
join Test t2 on t1.A = t2.B and t1.B = t2.A
Or use an intersection:
select A, B from Test
intersect
select B, A from Test
The self-join would probably be faster though.
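Both forms are easy to verify on SQLite with a tiny Test table (data invented); the symmetric pair (1,2)/(2,1) survives, the unpaired (3,4) does not:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Test (A INTEGER, B INTEGER);
INSERT INTO Test VALUES (1, 2), (2, 1), (3, 4);
""")

# Self-join: keep (A, B) only if (B, A) also exists.
pairs = conn.execute("""
SELECT t1.A, t1.B
FROM Test t1
JOIN Test t2 ON t1.A = t2.B AND t1.B = t2.A
ORDER BY t1.A
""").fetchall()

# Intersection: same result via set semantics.
inter = conn.execute("""
SELECT A, B FROM Test
INTERSECT
SELECT B, A FROM Test
ORDER BY A
""").fetchall()
```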
qid & accept id:
(8044345, 8052502)
query:
DBIx::Class : Resultset order_by based upon existence of a value in the list
soup:
ORDER BY expr might be what you're looking for.
\nFor example, here a table:
\nmysql> select * from test;\n+----+-----------+\n| id | name |\n+----+-----------+\n| 1 | London |\n| 2 | Paris |\n| 3 | Tokio |\n| 4 | Rome |\n| 5 | Amsterdam |\n+----+-----------+\n
\nHere the special ordering:
\nmysql> select * from test order by name = 'London' desc, \n name = 'Paris' desc, \n name = 'Amsterdam' desc;\n+----+-----------+\n| id | name |\n+----+-----------+\n| 1 | London |\n| 2 | Paris |\n| 5 | Amsterdam |\n| 3 | Tokio |\n| 4 | Rome |\n+----+-----------+\n
\nTranslating this into a ResultSet method:
\n$schema->resultset('Test')->search(\n {},\n {order_by => {-desc => q[name in ('London', 'New York', 'Tokyo')] }}\n);\n
\n
soup wrap:
ORDER BY expr might be what you're looking for.
For example, here's a table:
mysql> select * from test;
+----+-----------+
| id | name |
+----+-----------+
| 1 | London |
| 2 | Paris |
| 3 | Tokio |
| 4 | Rome |
| 5 | Amsterdam |
+----+-----------+
Here's the special ordering:
mysql> select * from test order by name = 'London' desc,
name = 'Paris' desc,
name = 'Amsterdam' desc;
+----+-----------+
| id | name |
+----+-----------+
| 1 | London |
| 2 | Paris |
| 5 | Amsterdam |
| 3 | Tokio |
| 4 | Rome |
+----+-----------+
Translating this into a ResultSet method:
$schema->resultset('Test')->search(
{},
{order_by => {-desc => q[name in ('London', 'New York', 'Tokyo')] }}
);
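The trick works on SQLite too, since a comparison like `name = 'London'` evaluates to 0 or 1 and can be sorted on directly (a final `id` key is added here only to make the order of the leftover rows deterministic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test (id INTEGER, name TEXT);
INSERT INTO test VALUES
 (1, 'London'), (2, 'Paris'), (3, 'Tokio'), (4, 'Rome'), (5, 'Amsterdam');
""")

# Each boolean sort key pulls its matching row to the front in turn;
# non-matching rows tie at 0 and fall back to the id key.
names = [r[0] for r in conn.execute("""
SELECT name FROM test
ORDER BY name = 'London' DESC,
         name = 'Paris' DESC,
         name = 'Amsterdam' DESC,
         id
""")]
```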
qid & accept id:
(8046345, 8046509)
query:
Conditional GROUP BY and additional columns?
soup:
Table X must have at least five columns whose names, we can presume, are a, b, c, x, y.
\nIf you are doing a single INSERT, then you'll need to insert into all five columns. If you are doing multiple INSERT operations, you can insert into 3 and then 5 (or vice versa) columns. You may have to do some juggling with the NULL values in the select-list of the first alternative. I'm assuming that the columns x and y are INTEGER for definiteness - choose the appropriate type.
\n1st Alternative
\nINSERT INTO x(a, b, c, x, y)\n SELECT a, b, c, MAX(CAST(NULL AS INTEGER)) AS x, MAX(CAST(NULL AS INTEGER)) AS y\n FROM pqr\n WHERE p_a IS NULL\n GROUP BY a, b, c\n UNION\n SELECT MAX(a) AS a, MAX(b) AS b, MAX(c) AS c, x, y\n FROM pqr\n WHERE p_a IS NOT NULL\n GROUP BY x, y;\n
\nYou could replace the GROUP BY a, b, c clause with a DISTINCT in front of a in the select-list of the first part of the UNION. In most SQL DBMS, you must list all the non-aggregate columns from the select-list in the GROUP BY clause. Using the MAX means that you have aggregates for x and y in the first half of the UNION and for a, b and c in the second half of the UNION.
\n2nd Alternative
\nINSERT INTO x(a, b, c)\n SELECT DISTINCT a, b, c\n FROM pqr\n WHERE p_a IS NULL;\nINSERT INTO x(a, b, c, x, y)\n SELECT MAX(a) AS a, MAX(b) AS b, MAX(c) AS c, x, y\n FROM pqr\n WHERE p_a IS NOT NULL\n GROUP BY x, y;\n
\nAs discussed before, you need aggregates on the columns not in the GROUP BY list.
\n3rd Alternative
\nIf you meant that you must group by x and y as well as a, b and c, then the second half of the UNION (or the second SELECT) simplifies to:
\n SELECT a, b, c, x, y\n FROM pqr\n WHERE p_a IS NOT NULL\n GROUP BY a, b, c, x, y;\n
\nOr you can use DISTINCT again:
\n SELECT DISTINCT a, b, c, x, y\n FROM pqr\n WHERE p_a IS NOT NULL;\n
\n
soup wrap:
Table X must have at least five columns whose names, we can presume, are a, b, c, x, y.
If you are doing a single INSERT, then you'll need to insert into all five columns. If you are doing multiple INSERT operations, you can insert into 3 and then 5 (or vice versa) columns. You may have to do some juggling with the NULL values in the select-list of the first alternative. I'm assuming that the columns x and y are INTEGER for definiteness - choose the appropriate type.
1st Alternative
INSERT INTO x(a, b, c, x, y)
SELECT a, b, c, MAX(CAST(NULL AS INTEGER)) AS x, MAX(CAST(NULL AS INTEGER)) AS y
FROM pqr
WHERE p_a IS NULL
GROUP BY a, b, c
UNION
SELECT MAX(a) AS a, MAX(b) AS b, MAX(c) AS c, x, y
FROM pqr
WHERE p_a IS NOT NULL
GROUP BY x, y;
You could replace the GROUP BY a, b, c clause with a DISTINCT in front of a in the select-list of the first part of the UNION. In most SQL DBMS, you must list all the non-aggregate columns from the select-list in the GROUP BY clause. Using the MAX means that you have aggregates for x and y in the first half of the UNION and for a, b and c in the second half of the UNION.
2nd Alternative
INSERT INTO x(a, b, c)
SELECT DISTINCT a, b, c
FROM pqr
WHERE p_a IS NULL;
INSERT INTO x(a, b, c, x, y)
SELECT MAX(a) AS a, MAX(b) AS b, MAX(c) AS c, x, y
FROM pqr
WHERE p_a IS NOT NULL
GROUP BY x, y;
As discussed before, you need aggregates on the columns not in the GROUP BY list.
3rd Alternative
If you meant that you must group by x and y as well as a, b and c, then the second half of the UNION (or the second SELECT) simplifies to:
SELECT a, b, c, x, y
FROM pqr
WHERE p_a IS NOT NULL
GROUP BY a, b, c, x, y;
Or you can use DISTINCT again:
SELECT DISTINCT a, b, c, x, y
FROM pqr
WHERE p_a IS NOT NULL;
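A runnable check of the 2nd Alternative on SQLite, with invented sample data (the target table is named `x5` here rather than `x` purely to keep it visually distinct from the column x):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pqr (p_a INTEGER, a INTEGER, b INTEGER, c INTEGER,
                  x INTEGER, y INTEGER);
CREATE TABLE x5 (a INTEGER, b INTEGER, c INTEGER, x INTEGER, y INTEGER);
INSERT INTO pqr VALUES
 (NULL, 1, 1, 1, 9, 9),
 (NULL, 1, 1, 1, 8, 8),   -- same (a,b,c): collapses to one row
 (7,    2, 2, 2, 5, 5),
 (7,    3, 3, 3, 5, 5);   -- same (x,y): collapses to one row

-- First INSERT: three columns, p_a IS NULL rows, deduplicated.
INSERT INTO x5 (a, b, c)
  SELECT DISTINCT a, b, c FROM pqr WHERE p_a IS NULL;

-- Second INSERT: five columns, aggregating the non-grouped columns.
INSERT INTO x5 (a, b, c, x, y)
  SELECT MAX(a), MAX(b), MAX(c), x, y
  FROM pqr WHERE p_a IS NOT NULL GROUP BY x, y;
""")
rows = conn.execute("SELECT * FROM x5 ORDER BY a").fetchall()
```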
qid & accept id:
(8046386, 8047626)
query:
sum of time based on flag from multiple rows SQL Server
soup:
If there was an additional criterion for us to distinguish contiguous sequences of events with identical ign values from one another, we could take from each sequence with ign=1 its earliest event and link it with the earliest event of the corresponding ign=0 sequence.
\nIt is possible to add such a criterion, as you will see below. I'm going to post the solution first, then explain how it works.
\nFirst, the setup:
\nDECLARE @atable TABLE (\n Id int IDENTITY,\n UnitId int,\n eventtime datetime,\n ign bit\n);\nINSERT INTO @atable (UnitId, eventtime, ign)\nSELECT 356, '2011-05-04 10:41:00.000', 1 UNION ALL\nSELECT 356, '2011-05-04 10:42:00.000', 1 UNION ALL\nSELECT 356, '2011-05-04 10:43:00.000', 1 UNION ALL\nSELECT 356, '2011-05-04 10:45:00.000', 1 UNION ALL\nSELECT 356, '2011-05-04 10:47:00.000', 1 UNION ALL\nSELECT 356, '2011-05-04 10:48:00.000', 0 UNION ALL\nSELECT 356, '2011-05-04 11:14:00.000', 1 UNION ALL\nSELECT 356, '2011-05-04 11:14:00.000', 1 UNION ALL\nSELECT 356, '2011-05-04 11:15:00.000', 1 UNION ALL\nSELECT 356, '2011-05-04 11:15:00.000', 1 UNION ALL\nSELECT 356, '2011-05-04 11:15:00.000', 1 UNION ALL\nSELECT 356, '2011-05-04 11:16:00.000', 0 UNION ALL\nSELECT 356, '2011-05-04 11:16:00.000', 0 UNION ALL\nSELECT 356, '2011-05-04 11:16:00.000', 0 UNION ALL\nSELECT 356, '2011-05-04 14:49:00.000', 1 UNION ALL\nSELECT 356, '2011-05-04 14:50:00.000', 1 UNION ALL\nSELECT 356, '2011-05-04 14:50:00.000', 1 UNION ALL\nSELECT 356, '2011-05-04 14:51:00.000', 1 UNION ALL\nSELECT 356, '2011-05-04 14:52:00.000', 0 UNION ALL\nSELECT 356, '2011-05-04 14:52:00.000', 0 UNION ALL\nSELECT 356, '2011-05-04 20:52:00.000', 0;\n
\nAnd now the query:
\nWITH\nmarked AS (\n SELECT\n *,\n Grp = ROW_NUMBER() OVER (PARTITION BY UnitId ORDER BY eventtime) -\n ROW_NUMBER() OVER (PARTITION BY UnitId, ign ORDER BY eventtime)\n FROM @atable\n),\nranked AS (\n SELECT\n *,\n seqRank = DENSE_RANK() OVER (PARTITION BY UnitId, ign ORDER BY Grp),\n eventRank = ROW_NUMBER() OVER (PARTITION BY UnitId, ign, Grp ORDER BY eventtime)\n FROM marked\n),\nfinal AS (\n SELECT\n s.UnitId,\n EventStart = s.eventtime,\n EventEnd = e.eventtime\n FROM ranked s\n INNER JOIN ranked e ON s.UnitId = e.UnitId AND s.seqRank = e.seqRank\n WHERE s.ign = 1\n AND e.ign = 0\n AND s.eventRank = 1\n AND e.eventRank = 1\n)\nSELECT *\nFROM final\nORDER BY\n UnitId,\n EventStart\n
\nThis is how it works.
\nThe marked common table expression (CTE) provides us with the additional criterion I was talking about at the beginning. The result set it produces looks like this:
\nId UnitId eventtime ign Grp\n-- ------ ----------------------- --- ---\n1 356 2011-05-04 10:41:00.000 1 0\n2 356 2011-05-04 10:42:00.000 1 0\n3 356 2011-05-04 10:43:00.000 1 0\n4 356 2011-05-04 10:45:00.000 1 0\n5 356 2011-05-04 10:47:00.000 1 0\n6 356 2011-05-04 10:48:00.000 0 5\n7 356 2011-05-04 11:14:00.000 1 1\n8 356 2011-05-04 11:14:00.000 1 1\n9 356 2011-05-04 11:15:00.000 1 1\n10 356 2011-05-04 11:15:00.000 1 1\n11 356 2011-05-04 11:15:00.000 1 1\n12 356 2011-05-04 11:16:00.000 0 10\n13 356 2011-05-04 11:16:00.000 0 10\n14 356 2011-05-04 11:16:00.000 0 10\n15 356 2011-05-04 14:49:00.000 1 4\n16 356 2011-05-04 14:50:00.000 1 4\n17 356 2011-05-04 14:50:00.000 1 4\n18 356 2011-05-04 14:51:00.000 1 4\n19 356 2011-05-04 14:52:00.000 0 14\n20 356 2011-05-04 14:52:00.000 0 14\n21 356 2011-05-04 20:52:00.000 0 14\n
\nYou can see for yourself how every sequence of events with identical ign can now be easily distinguished from the others by its own key of (UnitId, ign, Grp). So now we can rank every sequence as well as every event within a sequence, which is what the ranked CTE does. It produces the following result set:
\nId UnitId eventtime ign Grp seqRank eventRank\n-- ------ ----------------------- --- --- ------- ---------\n1 356 2011-05-04 10:41:00.000 1 0 1 1\n2 356 2011-05-04 10:42:00.000 1 0 1 2\n3 356 2011-05-04 10:43:00.000 1 0 1 3\n4 356 2011-05-04 10:45:00.000 1 0 1 4\n5 356 2011-05-04 10:47:00.000 1 0 1 5\n6 356 2011-05-04 10:48:00.000 0 5 1 1\n7 356 2011-05-04 11:14:00.000 1 1 2 1\n8 356 2011-05-04 11:14:00.000 1 1 2 2\n9 356 2011-05-04 11:15:00.000 1 1 2 3\n10 356 2011-05-04 11:15:00.000 1 1 2 4\n11 356 2011-05-04 11:15:00.000 1 1 2 5\n12 356 2011-05-04 11:16:00.000 0 10 2 1\n13 356 2011-05-04 11:16:00.000 0 10 2 2\n14 356 2011-05-04 11:16:00.000 0 10 2 3\n15 356 2011-05-04 14:49:00.000 1 4 3 1\n16 356 2011-05-04 14:50:00.000 1 4 3 2\n17 356 2011-05-04 14:50:00.000 1 4 3 3\n18 356 2011-05-04 14:51:00.000 1 4 3 4\n19 356 2011-05-04 14:52:00.000 0 14 3 1\n20 356 2011-05-04 14:52:00.000 0 14 3 2\n21 356 2011-05-04 20:52:00.000 0 14 3 3\n
\nYou can see that an ign=1 sequence can now be matched with an ign=0 sequence with the help of seqRank. And picking only the earliest event from every sequence (filtering by eventRank=1) we'll get start and end times of all the ign=1 sequences. And so the result of the final CTE is:
\nUnitId EventStart EventEnd\n------ ----------------------- -----------------------\n356 2011-05-04 10:41:00.000 2011-05-04 10:48:00.000\n356 2011-05-04 11:14:00.000 2011-05-04 11:16:00.000\n356 2011-05-04 14:49:00.000 2011-05-04 14:52:00.000\n
\nObviously, if the last ign=1 sequence isn't followed by an ign=0 event, it will not be shown in the final results, because the last ign=1 sequence will have no matching ign=0 sequence, using the above approach.
\nThere's one possible case when this query will not work as it is. It's when the event list starts with an ign=0 event instead of ign=1. If that is actually possible, you could simply add the following filter to the ranked CTE:
\nWHERE NOT (ign = 0 AND Grp = 0)\n-- Alternatively: WHERE ign <> 0 OR Grp <> 0\n
\nIt takes advantage of the fact that the first value of Grp will always be 0. So, if 0 is assigned to events with ign=0, those events should be excluded.
\n
\nUseful reading:
\n\n \n- \n
\n
\n
soup wrap:
If there were an additional criterion for us to distinguish contiguous sequences of events with identical ign values from one another, we could take from each sequence with ign=1 its earliest event and link it with the earliest event of the corresponding ign=0 sequence.
It is possible to add such a criterion, as you will see below. I'm going to post the solution first, then explain how it works.
First, the setup:
DECLARE @atable TABLE (
Id int IDENTITY,
UnitId int,
eventtime datetime,
ign bit
);
INSERT INTO @atable (UnitId, eventtime, ign)
SELECT 356, '2011-05-04 10:41:00.000', 1 UNION ALL
SELECT 356, '2011-05-04 10:42:00.000', 1 UNION ALL
SELECT 356, '2011-05-04 10:43:00.000', 1 UNION ALL
SELECT 356, '2011-05-04 10:45:00.000', 1 UNION ALL
SELECT 356, '2011-05-04 10:47:00.000', 1 UNION ALL
SELECT 356, '2011-05-04 10:48:00.000', 0 UNION ALL
SELECT 356, '2011-05-04 11:14:00.000', 1 UNION ALL
SELECT 356, '2011-05-04 11:14:00.000', 1 UNION ALL
SELECT 356, '2011-05-04 11:15:00.000', 1 UNION ALL
SELECT 356, '2011-05-04 11:15:00.000', 1 UNION ALL
SELECT 356, '2011-05-04 11:15:00.000', 1 UNION ALL
SELECT 356, '2011-05-04 11:16:00.000', 0 UNION ALL
SELECT 356, '2011-05-04 11:16:00.000', 0 UNION ALL
SELECT 356, '2011-05-04 11:16:00.000', 0 UNION ALL
SELECT 356, '2011-05-04 14:49:00.000', 1 UNION ALL
SELECT 356, '2011-05-04 14:50:00.000', 1 UNION ALL
SELECT 356, '2011-05-04 14:50:00.000', 1 UNION ALL
SELECT 356, '2011-05-04 14:51:00.000', 1 UNION ALL
SELECT 356, '2011-05-04 14:52:00.000', 0 UNION ALL
SELECT 356, '2011-05-04 14:52:00.000', 0 UNION ALL
SELECT 356, '2011-05-04 20:52:00.000', 0;
And now the query:
WITH
marked AS (
SELECT
*,
Grp = ROW_NUMBER() OVER (PARTITION BY UnitId ORDER BY eventtime) -
ROW_NUMBER() OVER (PARTITION BY UnitId, ign ORDER BY eventtime)
FROM @atable
),
ranked AS (
SELECT
*,
seqRank = DENSE_RANK() OVER (PARTITION BY UnitId, ign ORDER BY Grp),
eventRank = ROW_NUMBER() OVER (PARTITION BY UnitId, ign, Grp ORDER BY eventtime)
FROM marked
),
final AS (
SELECT
s.UnitId,
EventStart = s.eventtime,
EventEnd = e.eventtime
FROM ranked s
INNER JOIN ranked e ON s.UnitId = e.UnitId AND s.seqRank = e.seqRank
WHERE s.ign = 1
AND e.ign = 0
AND s.eventRank = 1
AND e.eventRank = 1
)
SELECT *
FROM final
ORDER BY
UnitId,
EventStart
This is how it works.
The marked common table expression (CTE) provides us with the additional criterion I was talking about at the beginning. The result set it produces looks like this:
Id UnitId eventtime ign Grp
-- ------ ----------------------- --- ---
1 356 2011-05-04 10:41:00.000 1 0
2 356 2011-05-04 10:42:00.000 1 0
3 356 2011-05-04 10:43:00.000 1 0
4 356 2011-05-04 10:45:00.000 1 0
5 356 2011-05-04 10:47:00.000 1 0
6 356 2011-05-04 10:48:00.000 0 5
7 356 2011-05-04 11:14:00.000 1 1
8 356 2011-05-04 11:14:00.000 1 1
9 356 2011-05-04 11:15:00.000 1 1
10 356 2011-05-04 11:15:00.000 1 1
11 356 2011-05-04 11:15:00.000 1 1
12 356 2011-05-04 11:16:00.000 0 10
13 356 2011-05-04 11:16:00.000 0 10
14 356 2011-05-04 11:16:00.000 0 10
15 356 2011-05-04 14:49:00.000 1 4
16 356 2011-05-04 14:50:00.000 1 4
17 356 2011-05-04 14:50:00.000 1 4
18 356 2011-05-04 14:51:00.000 1 4
19 356 2011-05-04 14:52:00.000 0 14
20 356 2011-05-04 14:52:00.000 0 14
21 356 2011-05-04 20:52:00.000 0 14
You can see for yourself how every sequence of events with identical ign can now be easily distinguished from the others by its own key of (UnitId, ign, Grp). So now we can rank every sequence as well as every event within a sequence, which is what the ranked CTE does. It produces the following result set:
Id UnitId eventtime ign Grp seqRank eventRank
-- ------ ----------------------- --- --- ------- ---------
1 356 2011-05-04 10:41:00.000 1 0 1 1
2 356 2011-05-04 10:42:00.000 1 0 1 2
3 356 2011-05-04 10:43:00.000 1 0 1 3
4 356 2011-05-04 10:45:00.000 1 0 1 4
5 356 2011-05-04 10:47:00.000 1 0 1 5
6 356 2011-05-04 10:48:00.000 0 5 1 1
7 356 2011-05-04 11:14:00.000 1 1 2 1
8 356 2011-05-04 11:14:00.000 1 1 2 2
9 356 2011-05-04 11:15:00.000 1 1 2 3
10 356 2011-05-04 11:15:00.000 1 1 2 4
11 356 2011-05-04 11:15:00.000 1 1 2 5
12 356 2011-05-04 11:16:00.000 0 10 2 1
13 356 2011-05-04 11:16:00.000 0 10 2 2
14 356 2011-05-04 11:16:00.000 0 10 2 3
15 356 2011-05-04 14:49:00.000 1 4 3 1
16 356 2011-05-04 14:50:00.000 1 4 3 2
17 356 2011-05-04 14:50:00.000 1 4 3 3
18 356 2011-05-04 14:51:00.000 1 4 3 4
19 356 2011-05-04 14:52:00.000 0 14 3 1
20 356 2011-05-04 14:52:00.000 0 14 3 2
21 356 2011-05-04 20:52:00.000 0 14 3 3
You can see that an ign=1 sequence can now be matched with an ign=0 sequence with the help of seqRank. And picking only the earliest event from every sequence (filtering by eventRank=1) we'll get start and end times of all the ign=1 sequences. And so the result of the final CTE is:
UnitId EventStart EventEnd
------ ----------------------- -----------------------
356 2011-05-04 10:41:00.000 2011-05-04 10:48:00.000
356 2011-05-04 11:14:00.000 2011-05-04 11:16:00.000
356 2011-05-04 14:49:00.000 2011-05-04 14:52:00.000
Obviously, if the last ign=1 sequence isn't followed by an ign=0 event, it will not be shown in the final results: with the above approach, that last ign=1 sequence has no matching ign=0 sequence.
There's one possible case when this query will not work as it is. It's when the event list starts with an ign=0 event instead of ign=1. If that is actually possible, you could simply add the following filter to the ranked CTE:
WHERE NOT (ign = 0 AND Grp = 0)
-- Alternatively: WHERE ign <> 0 OR Grp <> 0
It takes advantage of the fact that the first value of Grp will always be 0. So, if 0 is assigned to events with ign=0, those events should be excluded.
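The whole gaps-and-islands technique above can be reproduced outside SQL Server; here is a minimal sketch using Python's sqlite3 module (SQLite 3.25+ for window functions) against a cut-down, hypothetical version of the sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE atable (UnitId INT, eventtime TEXT, ign INT);
INSERT INTO atable VALUES
 (356,'2011-05-04 10:41',1),(356,'2011-05-04 10:42',1),
 (356,'2011-05-04 10:48',0),
 (356,'2011-05-04 11:14',1),(356,'2011-05-04 11:15',1),
 (356,'2011-05-04 11:16',0);
""")

rows = con.execute("""
WITH marked AS (
  -- Grp: difference of the two row numbers; constant within each run of equal ign
  SELECT *,
         ROW_NUMBER() OVER (PARTITION BY UnitId ORDER BY eventtime)
       - ROW_NUMBER() OVER (PARTITION BY UnitId, ign ORDER BY eventtime) AS Grp
  FROM atable
),
ranked AS (
  SELECT *,
         DENSE_RANK() OVER (PARTITION BY UnitId, ign ORDER BY Grp) AS seqRank,
         ROW_NUMBER() OVER (PARTITION BY UnitId, ign, Grp ORDER BY eventtime) AS eventRank
  FROM marked
)
-- pair the n-th ign=1 island with the n-th ign=0 island, earliest event of each
SELECT s.UnitId, s.eventtime AS EventStart, e.eventtime AS EventEnd
FROM ranked s
JOIN ranked e ON s.UnitId = e.UnitId AND s.seqRank = e.seqRank
WHERE s.ign = 1 AND e.ign = 0 AND s.eventRank = 1 AND e.eventRank = 1
ORDER BY s.UnitId, EventStart
""").fetchall()
print(rows)
```

With the trimmed data, the two tuples printed are the two ignition intervals (10:41–10:48 and 11:14–11:16).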
qid & accept id:
(8073455, 8073490)
query:
How can I Determine Date of Import from MySQL?
soup:
A quick way is to check the create_time or update_time when you execute this command:
\nshow table status;\n
\nlike the following example:
\n+--------------------+--------+---------+------------+------+----------------+-------------+------------------+--------------+-----------+----------------+---------------------+---------------------+------------+-------------------+----------+----------------+---------+\n| Name | Engine | Version | Row_format | Rows | Avg_row_length | Data_length | Max_data_length | Index_length | Data_free | Auto_increment | Create_time | Update_time | Check_time | Collation | Checksum | Create_options | Comment |\n+--------------------+--------+---------+------------+------+----------------+-------------+------------------+--------------+-----------+----------------+---------------------+---------------------+------------+-------------------+----------+----------------+---------+\n| a_table | MyISAM | 10 | Dynamic | 2 | 60 | 120 | 281474976710655 | 1024 | 0 | NULL | 2011-09-08 18:26:38 | 2011-11-07 20:38:28 | NULL | latin1_swedish_ci | NULL | | |\n
\n
soup wrap:
A quick way is to check the create_time or update_time when you execute this command:
show table status;
like the following example:
+--------------------+--------+---------+------------+------+----------------+-------------+------------------+--------------+-----------+----------------+---------------------+---------------------+------------+-------------------+----------+----------------+---------+
| Name | Engine | Version | Row_format | Rows | Avg_row_length | Data_length | Max_data_length | Index_length | Data_free | Auto_increment | Create_time | Update_time | Check_time | Collation | Checksum | Create_options | Comment |
+--------------------+--------+---------+------------+------+----------------+-------------+------------------+--------------+-----------+----------------+---------------------+---------------------+------------+-------------------+----------+----------------+---------+
| a_table | MyISAM | 10 | Dynamic | 2 | 60 | 120 | 281474976710655 | 1024 | 0 | NULL | 2011-09-08 18:26:38 | 2011-11-07 20:38:28 | NULL | latin1_swedish_ci | NULL | | |
qid & accept id:
(8108295, 8160666)
query:
Intersection of sets
soup:
So here's what I've come up with:
\n$this->Sql = 'SELECT DISTINCT * FROM `nodes` `n`\n JOIN `tagged_nodes` `t` ON t.nid=n.nid';\n\n $i=0;\nforeach( $tagids as $tagid ) {\n $t = 't' . $i++;\n $this->Sql .= ' INNER JOIN `tagged_nodes` `'.$t.'` ON '\n .$t'.tid=t.tid WHERE '.$t.'.tid='.$tagid;\n}\n
\nIt's in PHP since I need it to be dynamic, but it would basically be the following if I needed, say, only 2 tags (animals, pets).
\nSELECT * FROM nodes n JOIN tagged_nodes t ON t.nid=n.nid\nINNER JOIN tagged_nodes t1 ON t1.tid=t.tid WHERE t1.tid='animals'\nINNER JOIN tagged_nodes t2 ON t2.tid=t.tid WHERE t2.tid='pets'\n
\nAm I on the right track?
\n
soup wrap:
So here's what I've come up with:
$this->Sql = 'SELECT DISTINCT * FROM `nodes` `n`
JOIN `tagged_nodes` `t` ON t.nid=n.nid';
$i=0;
foreach( $tagids as $tagid ) {
$t = 't' . $i++;
$this->Sql .= ' INNER JOIN `tagged_nodes` `'.$t.'` ON '
.$t'.tid=t.tid WHERE '.$t.'.tid='.$tagid;
}
It's in PHP since I need it to be dynamic, but it would basically be the following if I needed, say, only 2 tags (animals, pets).
SELECT * FROM nodes n JOIN tagged_nodes t ON t.nid=n.nid
INNER JOIN tagged_nodes t1 ON t1.tid=t.tid WHERE t1.tid='animals'
INNER JOIN tagged_nodes t2 ON t2.tid=t.tid WHERE t2.tid='pets'
Am I on the right track?
qid & accept id:
(8110165, 8110179)
query:
Removing duplicate foreign key rows in MySQL database
soup:
Assuming your School table has a store_ID from what you've said.
\nI would start by figuring out for each duplicate, which store_ID you want to keep. I will also assume that you want it to be the lowest ID value. I would then update the Schools' store_ID to be the MIN(store_ID) for the current URL they have. You should then be free to delete the extra store_ID records
\nThis is how I would go about the update:
\nUPDATE sch\nSET sch.Store_ID = matcher.store_ID\nFROM Schools AS sch\nINNER JOIN Stores AS st ON sch.store_ID = st.store_ID\nINNER JOIN\n(\n SELECT MIN(st.store_id) AS store_ID, store_url\n FROM Schools AS sch\n INNER JOIN Stores AS st ON sch.store_ID = st.store_ID\n GROUP BY Store_URL\n) AS matcher ON st.Store_URL = matcher.Store_Url\n AND st.Store_ID != matcher.store_ID\n
\nIf you are able to delete stores that do not have an associated school, the following query will remove the extra rows:
\nDELETE FROM st\nFROM Stores AS st\nLEFT JOIN Schools AS sch ON st.Store_ID = sch.Store_Id\nWHERE sch.Store_id IS NULL\n
\nIf you only want to delete the Store's duplicate records, I would look at this query instead of the above:
\nDELETE FROM st\nFROM Stores AS st\nINNER JOIN\n(\n SELECT MIN(st.store_ID) store_Id, st.Store_Url\n FROM Stores AS st\n GROUP BY st.Store_URL\n) AS useful ON st.Store_Url = useful.Store_URL\nWHERE st.Store_ID != useful.store_Id\n
\n
soup wrap:
From what you've said, I'm assuming your Schools table has a store_ID.
I would start by figuring out, for each duplicate, which store_ID you want to keep; I will assume you want the lowest ID value. I would then update each School's store_ID to be the MIN(store_ID) for its current URL. You should then be free to delete the extra store_ID records.
This is how I would go about the update:
UPDATE sch
SET sch.Store_ID = matcher.store_ID
FROM Schools AS sch
INNER JOIN Stores AS st ON sch.store_ID = st.store_ID
INNER JOIN
(
SELECT MIN(st.store_id) AS store_ID, store_url
FROM Schools AS sch
INNER JOIN Stores AS st ON sch.store_ID = st.store_ID
GROUP BY Store_URL
) AS matcher ON st.Store_URL = matcher.Store_Url
AND st.Store_ID != matcher.store_ID
If you are able to delete stores that do not have an associated school, the following query will remove the extra rows:
DELETE FROM st
FROM Stores AS st
LEFT JOIN Schools AS sch ON st.Store_ID = sch.Store_Id
WHERE sch.Store_id IS NULL
If you only want to delete the Store's duplicate records, I would look at this query instead of the above:
DELETE FROM st
FROM Stores AS st
INNER JOIN
(
SELECT MIN(st.store_ID) store_Id, st.Store_Url
FROM Stores AS st
GROUP BY st.Store_URL
) AS useful ON st.Store_Url = useful.Store_URL
WHERE st.Store_ID != useful.store_Id
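The repoint-then-delete strategy above is written in T-SQL's UPDATE ... FROM / DELETE ... FROM dialect; as a rough sketch of the same two steps in more portable SQL, using Python's sqlite3 and invented rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Stores  (Store_ID INT, Store_URL TEXT);
CREATE TABLE Schools (School_ID INT, Store_ID INT);
INSERT INTO Stores  VALUES (1,'http://a'),(2,'http://a'),(3,'http://b');
INSERT INTO Schools VALUES (10,2),(11,3);
""")

# Step 1: repoint each school at the lowest Store_ID sharing its store's URL
con.execute("""
UPDATE Schools
SET Store_ID = (
  SELECT MIN(s2.Store_ID)
  FROM Stores s1 JOIN Stores s2 ON s1.Store_URL = s2.Store_URL
  WHERE s1.Store_ID = Schools.Store_ID)
""")

# Step 2: delete every store that is not the keeper for its URL
con.execute("""
DELETE FROM Stores
WHERE Store_ID NOT IN (
  SELECT MIN(Store_ID) FROM Stores GROUP BY Store_URL)
""")

schools = con.execute("SELECT School_ID, Store_ID FROM Schools ORDER BY School_ID").fetchall()
stores  = con.execute("SELECT Store_ID FROM Stores ORDER BY Store_ID").fetchall()
```

School 10 is repointed from store 2 to store 1 (same URL), and only stores 1 and 3 survive.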
qid & accept id:
(8111247, 8113193)
query:
How to move a DB2 SQL result table into a physical file?
soup:
If you want to create the table automatically you can also use the following form:
\nCREATE TABLE new_table_name \nAS (SELECT * FROM \n UNION SELECT * FROM ) WITH DATA\n
\nNote that you can create a view over the query to dynamically build the result set on demand. The view can then be referenced from any HLL as a logical file:
\nCREATE VIEW new_table_name\nAS SELECT * FROM \n UNION SELECT * FROM \n
\n
soup wrap:
If you want to create the table automatically you can also use the following form:
CREATE TABLE new_table_name
AS (SELECT * FROM
UNION SELECT * FROM ) WITH DATA
Note that you can create a view over the query to dynamically build the result set on demand. The view can then be referenced from any HLL as a logical file:
CREATE VIEW new_table_name
AS SELECT * FROM
UNION SELECT * FROM
qid & accept id:
(8128360, 8153266)
query:
using multiple left outer joins pl/sql
soup:
Okay, so after taking Wolf's suggestion, i went in and ran the following line of code
\n select categorytype, count(*) \nfrom nptcategories \ngroup by categorytype \nhaving count(*) > 1;\n
\nAfter running this, i found that somehow there were duplicates of records in this table so, this was fixed by removing the duplicates and setting the table to have unique ids. This was done by using the following script on the DB:
\nalter table nptcategories add constraint nptcatidunq unique(categoryid)\n
\n
soup wrap:
Okay, so after taking Wolf's suggestion, I went in and ran the following line of code:
select categorytype, count(*)
from nptcategories
group by categorytype
having count(*) > 1;
After running this, I found that somehow there were duplicate records in this table. This was fixed by removing the duplicates and giving the table unique IDs, using the following script on the DB:
alter table nptcategories add constraint nptcatidunq unique(categoryid)
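For illustration, the duplicate check runs in any SQL engine; a tiny sketch with Python's sqlite3 and made-up rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE nptcategories (categoryid INT, categorytype TEXT);
INSERT INTO nptcategories VALUES (1,'A'),(2,'B'),(3,'B');
""")

# Same GROUP BY / HAVING pattern as above: any type appearing more than once
dupes = con.execute("""
SELECT categorytype, COUNT(*)
FROM nptcategories
GROUP BY categorytype
HAVING COUNT(*) > 1
""").fetchall()
```

Here only type 'B' is reported, with its count of 2.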
qid & accept id:
(8139699, 8139768)
query:
select records that don't have certain values in 2 columns
soup:
SELECT\n *\nFROM\n MyTable AS data\nLEFT JOIN\n (SELECT x, y, z FROM UpdateMyTable) AS check\n ON data.x = check.x\n AND data.y = check.y\n AND data.z = check.z\nWHERE\n x = @x\n AND check.x IS NULL\n
\n
\nOR
\nSELECT\n *\nFROM\n MyTable AS data\nWHERE\n x = @x\n AND NOT EXISTS (\n SELECT\n *\n FROM\n UpdateMyTable AS check\n WHERE\n data.x = check.x\n AND data.y = check.y\n AND data.z = check.z\n )\n
\n
soup wrap:
SELECT
*
FROM
MyTable AS data
LEFT JOIN
(SELECT x, y, z FROM UpdateMyTable) AS check
ON data.x = check.x
AND data.y = check.y
AND data.z = check.z
WHERE
x = @x
AND check.x IS NULL
OR
SELECT
*
FROM
MyTable AS data
WHERE
x = @x
AND NOT EXISTS (
SELECT
*
FROM
UpdateMyTable AS check
WHERE
data.x = check.x
AND data.y = check.y
AND data.z = check.z
)
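Both forms are standard anti-join patterns and should return identical rows; a small sanity check with Python's sqlite3 and invented data (the alias is renamed from check to c here, since CHECK is a reserved word in many dialects):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE MyTable       (x INT, y INT, z INT);
CREATE TABLE UpdateMyTable (x INT, y INT, z INT);
INSERT INTO MyTable VALUES (1,1,1),(1,2,2),(2,1,1);
INSERT INTO UpdateMyTable VALUES (1,1,1);
""")

# LEFT JOIN form: keep rows where the join found nothing
left_join = con.execute("""
SELECT d.* FROM MyTable d
LEFT JOIN UpdateMyTable c ON d.x=c.x AND d.y=c.y AND d.z=c.z
WHERE d.x = 1 AND c.x IS NULL
""").fetchall()

# NOT EXISTS form: correlated subquery finds no match
not_exists = con.execute("""
SELECT d.* FROM MyTable d
WHERE d.x = 1 AND NOT EXISTS (
  SELECT 1 FROM UpdateMyTable c
  WHERE d.x=c.x AND d.y=c.y AND d.z=c.z)
""").fetchall()
```

Both queries return only (1,2,2): the x=1 row with no counterpart in UpdateMyTable.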
qid & accept id:
(8143581, 8143720)
query:
Extract first numeric part of field
soup:
Try this:
\nSELECT substring(address, '^\\d+') AS heading_number\nFROM tbl\nWHERE zip = 12345\nAND address ILIKE '3%'\n
\nReturns 1 or more digits from the start of the string.
\nLeave out the anchor ^ if you want the first sequence of digits in the string instead of the sequence at the start. Example:
\nSELECT substring('South 13rd street 3452435 foo', '\\d+');\n
\nRead about substring() and regular expressions in the manual.
\nIn more recent versions (8.0+), don't forget to use for escape string syntax like this:
\nSELECT substring('South 13rd street 3452435 foo', E'\\d+');\n
\n
soup wrap:
Try this:
SELECT substring(address, '^\\d+') AS heading_number
FROM tbl
WHERE zip = 12345
AND address ILIKE '3%'
Returns 1 or more digits from the start of the string.
Leave out the anchor ^ if you want the first sequence of digits in the string instead of the sequence at the start. Example:
SELECT substring('South 13rd street 3452435 foo', '\\d+');
Read about substring() and regular expressions in the manual.
In more recent versions (8.0+), don't forget to use E'...' for escape string syntax, like this:
SELECT substring('South 13rd street 3452435 foo', E'\\d+');
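The same extraction is easy to mirror in a client language; a sketch of both variants using Python's re module (the function names here are my own):

```python
import re

def leading_number(s):
    """Digits at the start of s, like substring(s, '^\\d+')."""
    m = re.match(r'\d+', s)
    return m.group() if m else None

def first_number(s):
    """First digit run anywhere in s, like substring(s, '\\d+')."""
    m = re.search(r'\d+', s)
    return m.group() if m else None
```

Anchored, '3452 South Street' yields '3452' and 'South 13rd street' yields nothing; unanchored, 'South 13rd street 3452435 foo' yields '13', the first sequence.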
qid & accept id:
(8153000, 8153092)
query:
Count Events/year in SQL
soup:
Look into date functions on mysql http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_datediff
\nYou can use datediff which will give you difference in days. Ex;
\nWHERE abs(datediff(now(), event_date)) < 365*5
\nor dateadd(), if your event dates are timestamps, use timestampdiff()
\nSample query
\nSELECT count(*) FROM mytable\nWHERE abs(datediff(now(), event_date)) < 365*5\n
\nUPDATE
\nbased on some of the comments I've read here, here's a query for you
\nSELECT year(event_date) as event_year, count(event_date)\nFROM mytable\nWHERE\nabs(datediff(now(), event_date)) < 365*5\nGROUP by year(event_date)\n
\nFeel free to adjust 5 in (365 * 5) for different range
\nUPDATE 2
\nThis is NOT very pretty but you can try this with pure mysql. You can also modify this to be a stored proc if necessary:
\nSET @y6 = year(now());\nSET @y5 = @y6-1;\nSET @y4 = @y5-1;\nSET @y3 = @y4-1;\nSET @y2 = @y3-1;\nSET @y1 = @y2-1;\n\nSET @y7 = @y6+1;\nSET @y8 = @y7+1;\nSET @y9 = @y8+1;\nSET @y10 = @y9+1;\nSET @y11 = @y10+1;\n\nCREATE TEMPORARY TABLE event_years (event_year int not null);\nINSERT INTO event_years SELECT @y1;\nINSERT INTO event_years SELECT @y2;\nINSERT INTO event_years SELECT @y3;\nINSERT INTO event_years SELECT @y4;\nINSERT INTO event_years SELECT @y5;\nINSERT INTO event_years SELECT @y6;\nINSERT INTO event_years SELECT @y7;\nINSERT INTO event_years SELECT @y8;\nINSERT INTO event_years SELECT @y9;\nINSERT INTO event_years SELECT @y10;\nINSERT INTO event_years SELECT @y11;\n\nSELECT ey.event_year , (SELECT count(event_date) from mytable where year(event_date) = ey.event_year)\nfrom event_years ey;\n
\ntemporary table will get dropped by itself after your connection is closed. If you add DROP TABLE after SELECT, you might not get your results back.
\n
soup wrap:
Look into the date functions in MySQL: http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_datediff
You can use datediff, which gives you the difference in days, e.g.:
WHERE abs(datediff(now(), event_date)) < 365*5
or dateadd(); if your event dates are timestamps, use timestampdiff()
Sample query
SELECT count(*) FROM mytable
WHERE abs(datediff(now(), event_date)) < 365*5
UPDATE
Based on some of the comments I've read here, here's a query for you:
SELECT year(event_date) as event_year, count(event_date)
FROM mytable
WHERE
abs(datediff(now(), event_date)) < 365*5
GROUP by year(event_date)
Feel free to adjust the 5 in (365 * 5) for a different range.
UPDATE 2
This is NOT very pretty, but you can try it with pure MySQL. You can also turn this into a stored proc if necessary:
SET @y6 = year(now());
SET @y5 = @y6-1;
SET @y4 = @y5-1;
SET @y3 = @y4-1;
SET @y2 = @y3-1;
SET @y1 = @y2-1;
SET @y7 = @y6+1;
SET @y8 = @y7+1;
SET @y9 = @y8+1;
SET @y10 = @y9+1;
SET @y11 = @y10+1;
CREATE TEMPORARY TABLE event_years (event_year int not null);
INSERT INTO event_years SELECT @y1;
INSERT INTO event_years SELECT @y2;
INSERT INTO event_years SELECT @y3;
INSERT INTO event_years SELECT @y4;
INSERT INTO event_years SELECT @y5;
INSERT INTO event_years SELECT @y6;
INSERT INTO event_years SELECT @y7;
INSERT INTO event_years SELECT @y8;
INSERT INTO event_years SELECT @y9;
INSERT INTO event_years SELECT @y10;
INSERT INTO event_years SELECT @y11;
SELECT ey.event_year , (SELECT count(event_date) from mytable where year(event_date) = ey.event_year)
from event_years ey;
The temporary table will be dropped automatically when your connection is closed. If you add a DROP TABLE right after the SELECT, you might not get your results back.
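The per-year GROUP BY is the heart of the first query; a quick sketch (SQLite via Python here, so MySQL's year() becomes strftime('%Y', ...), and the data is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE mytable (event_date TEXT);
INSERT INTO mytable VALUES
 ('2009-03-01'),('2009-07-15'),('2010-01-01'),('2011-06-30');
""")

# Count events per calendar year, like GROUP BY year(event_date)
per_year = con.execute("""
SELECT strftime('%Y', event_date) AS event_year, COUNT(*)
FROM mytable
GROUP BY event_year
ORDER BY event_year
""").fetchall()
```

Here 2009 gets a count of 2, and 2010 and 2011 get 1 each.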
qid & accept id:
(8159093, 8159213)
query:
How can you order like items in a nested set hierarchical structure?
soup:
I'm still a little unclear on what you are asking, but it appears you can get your desired result set with the following query:
\nSELECT distinct 'Junior' as Database, \n xType, \n displayLabel, \n child_xType, \n child_displayLabel\nFROM MyTable\nORDER BY displayLabel DESC, child_displayLabel ASC\n
\nUPDATE:
\nI'm still confused after your last comment but give this a try
\nSELECT 'Junior' as Database, \n xType, \n displayLabel, \n child_xType, \n child_displayLabel\nFROM MyTable\nGROUP BY xType, displayLabel, child_xType, child_displayLabel\nORDER BY min(lft1), min(lft2)\n
\n
soup wrap:
I'm still a little unclear on what you are asking, but it appears you can get your desired result set with the following query:
SELECT distinct 'Junior' as Database,
xType,
displayLabel,
child_xType,
child_displayLabel
FROM MyTable
ORDER BY displayLabel DESC, child_displayLabel ASC
UPDATE:
I'm still confused after your last comment but give this a try
SELECT 'Junior' as Database,
xType,
displayLabel,
child_xType,
child_displayLabel
FROM MyTable
GROUP BY xType, displayLabel, child_xType, child_displayLabel
ORDER BY min(lft1), min(lft2)
qid & accept id:
(8216437, 8216634)
query:
SQL: Remove duplicates
soup:
A textbook candidate for the window function row_number():
\n;WITH x AS (\n SELECT unique_ID\n ,row_number() OVER (PARTITION BY worker_ID,type_ID ORDER BY date) AS rn\n FROM tbl\n )\nDELETE FROM tbl\nFROM x\nWHERE tbl.unique_ID = x.unique_ID\nAND x.rn > 1\n
\nThis also takes care of the situation where a set of dupes on (worker_ID,type_ID) shares the same date.
\nSee the simplified demo on data.SE.
\nUpdate with simpler version
\nTurns out, this can be simplified: In SQL Server you can delete from the CTE directly:
\n;WITH x AS (\n SELECT unique_ID\n ,row_number() OVER (PARTITION BY worker_ID,type_ID ORDER BY date) AS rn\n FROM tbl\n )\nDELETE x\nWHERE rn > 1\n
\n
soup wrap:
A textbook candidate for the window function row_number():
;WITH x AS (
SELECT unique_ID
,row_number() OVER (PARTITION BY worker_ID,type_ID ORDER BY date) AS rn
FROM tbl
)
DELETE FROM tbl
FROM x
WHERE tbl.unique_ID = x.unique_ID
AND x.rn > 1
This also takes care of the situation where a set of dupes on (worker_ID,type_ID) shares the same date.
See the simplified demo on data.SE.
Update with simpler version
Turns out, this can be simplified: In SQL Server you can delete from the CTE directly:
;WITH x AS (
SELECT unique_ID
,row_number() OVER (PARTITION BY worker_ID,type_ID ORDER BY date) AS rn
FROM tbl
)
DELETE x
WHERE rn > 1
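Outside SQL Server you can't always delete through a CTE, but the same row_number() idea works with a plain subquery; a hedged sketch with Python's sqlite3 (3.25+ for window functions) and invented rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tbl (unique_ID INT PRIMARY KEY, worker_ID INT, type_ID INT, date TEXT);
INSERT INTO tbl VALUES
 (1, 100, 1, '2011-01-01'),
 (2, 100, 1, '2011-01-02'),  -- duplicate of (100, 1), later date
 (3, 200, 1, '2011-01-01');
""")

# Keep the earliest row per (worker_ID, type_ID); delete everything ranked 2+
con.execute("""
DELETE FROM tbl
WHERE unique_ID IN (
  SELECT unique_ID FROM (
    SELECT unique_ID,
           ROW_NUMBER() OVER (PARTITION BY worker_ID, type_ID ORDER BY date) AS rn
    FROM tbl)
  WHERE rn > 1)
""")
remaining = [r[0] for r in con.execute("SELECT unique_ID FROM tbl ORDER BY unique_ID")]
```

Row 2 (the later duplicate) is removed; rows 1 and 3 survive.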
qid & accept id:
(8223650, 8223684)
query:
How do I get all rows that contains a string in a field (SQL)?
soup:
you could use the like operator
\nselect * from articles where tag like '%php%'\n
\nif you are worried about tags which are not php but have php in them like say phphp then you can use with comma
\nselect * from articles where tag like '%php,%' or tag like '%,php%'\n
\n
soup wrap:
You could use the LIKE operator:
select * from articles where tag like '%php%'
If you are worried about tags which are not php but merely contain php (say, phphp), you can match against the comma delimiter instead:
select * from articles where tag like '%php,%' or tag like '%,php%'
qid & accept id:
(8256892, 8257056)
query:
Converting normal datetime to a time zone in sql server 2008
soup:
Cast it to datetimeoffset like
\nselect CAST(dt as datetimeoffset) from test\n
\nEDIT:
\nyou can then use SWITCHOFFSET to get into the specified timezone. For your example
\nselect switchoffset(CAST(dt as datetimeoffset),'+05:30') from test \n
\nResults in 2011-11-24 23:26:30.0600000 +05:30
\n
soup wrap:
Cast it to datetimeoffset, like this:
select CAST(dt as datetimeoffset) from test
EDIT:
you can then use SWITCHOFFSET to get into the specified timezone. For your example
select switchoffset(CAST(dt as datetimeoffset),'+05:30') from test
Results in 2011-11-24 23:26:30.0600000 +05:30
qid & accept id:
(8276553, 8276938)
query:
Figure out the last item of a group of items in SQL
soup:
You did not state your DBMS, but the following is an ANSI compliant SQL (should work on PosgreSQL, Oracle, DB2)
\nSELECT *\nFROM (\n SELECT listid, \n itemid,\n case \n when lead(itemid) over (partition by listid order by itemid) is null then 'last'\n else 'not_last'\n end as last_flag\n FROM items_tbl\n WHERE listID = 'List_1'\n) t\nWHERE itemID = 'item_2'\n
\nEdit, the following should work on SQL Server (as that doesn't yet support lead()):
\nSELECT listid, \n itemid,\n case \n when rn = list_count the 'last'\n else 'not_last'\n end\nFROM (\n SELECT listid, \n itemid,\n row_number() over (partition by listid order by itemid) as rn,\n count(*) over (partition by listid) as list_count\n FROM items_tbl\n WHERE listID = 'List_1'\n) t\nWHERE itemID = 'item_2'\n
\n
soup wrap:
You did not state your DBMS, but the following is ANSI-compliant SQL (it should work on PostgreSQL, Oracle, and DB2):
SELECT *
FROM (
SELECT listid,
itemid,
case
when lead(itemid) over (partition by listid order by itemid) is null then 'last'
else 'not_last'
end as last_flag
FROM items_tbl
WHERE listID = 'List_1'
) t
WHERE itemID = 'item_2'
Edit, the following should work on SQL Server (as that doesn't yet support lead()):
SELECT listid,
itemid,
case
when rn = list_count then 'last'
else 'not_last'
end
FROM (
SELECT listid,
itemid,
row_number() over (partition by listid order by itemid) as rn,
count(*) over (partition by listid) as list_count
FROM items_tbl
WHERE listID = 'List_1'
) t
WHERE itemID = 'item_2'
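The lead() version is the simplest to verify; here is a minimal sketch with Python's sqlite3 (SQLite 3.25+ supports lead()), using invented list data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE items_tbl (listid TEXT, itemid TEXT);
INSERT INTO items_tbl VALUES
 ('List_1','item_1'),('List_1','item_2'),('List_2','item_1');
""")

# A row is 'last' when lead() finds no following item in its list
flags = con.execute("""
SELECT listid, itemid,
       CASE WHEN LEAD(itemid) OVER (PARTITION BY listid ORDER BY itemid) IS NULL
            THEN 'last' ELSE 'not_last' END AS last_flag
FROM items_tbl
ORDER BY listid, itemid
""").fetchall()
```

item_2 is flagged 'last' in List_1, and List_2's only item is 'last' as well.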
qid & accept id:
(8293350, 8293373)
query:
SQL query to join columns in result
soup:
You should try this:
\nSELECT que.*, opt.* FROM questions que\nINNER JOIN options opt ON que.queid = opt.queid\nWHERE que.queid = 1\n
\nINNER JOIN loads questions and options having at least one corresponing record in every table.
\nIf you need to get all questions (even the ones not having options) you could use
\nSELECT que.*, opt.* FROM questions que\nLEFT JOIN options opt ON que.queid = opt.queid\nWHERE que.queid = 1\n
\nLEFT JOIN always loads questions and, if they have options, their options too; if not you get NULL for options columns.
\n
soup wrap:
You should try this:
SELECT que.*, opt.* FROM questions que
INNER JOIN options opt ON que.queid = opt.queid
WHERE que.queid = 1
INNER JOIN loads questions and options having at least one corresponding record in each table.
If you need to get all questions (even the ones not having options) you could use
SELECT que.*, opt.* FROM questions que
LEFT JOIN options opt ON que.queid = opt.queid
WHERE que.queid = 1
LEFT JOIN always loads questions and, if they have options, their options too; if not you get NULL for options columns.
qid & accept id:
(8306044, 8306124)
query:
SQL - Summing events by date (5 days at a time)
soup:
You can try something like this:
\nselect\n Date,\n (select sum(events)\n from tablename d2\n where abs(datediff(DAY, d1.Date, d2.Date)) <= 2) as EventCount\nfrom\n tablename d1\nwhere\n Date between '11/03/2011' and '11/07/2011'\n
\nSample output:
\nDate EventCount\n11/03/2011 12\n11/04/2011 9 ** Note that the correct value for w02 is 9, not 7\n11/05/2011 14\n11/06/2011 10\n11/07/2011 14\n
\n
soup wrap:
You can try something like this:
select
Date,
(select sum(events)
from tablename d2
where abs(datediff(DAY, d1.Date, d2.Date)) <= 2) as EventCount
from
tablename d1
where
Date between '11/03/2011' and '11/07/2011'
Sample output:
Date EventCount
11/03/2011 12
11/04/2011 9 ** Note that the correct value for w02 is 9, not 7
11/05/2011 14
11/06/2011 10
11/07/2011 14
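The correlated subquery translates to other engines once datediff is swapped for a local equivalent; a sketch with Python's sqlite3 using julianday() and made-up event counts:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tablename (Date TEXT, events INT);
INSERT INTO tablename VALUES
 ('2011-11-01',1),('2011-11-02',2),('2011-11-03',3),
 ('2011-11-04',4),('2011-11-05',5);
""")

# For each row, sum events within +/- 2 days (a 5-day centered window)
windowed = con.execute("""
SELECT d1.Date,
       (SELECT SUM(d2.events) FROM tablename d2
        WHERE ABS(julianday(d2.Date) - julianday(d1.Date)) <= 2) AS EventCount
FROM tablename d1
WHERE d1.Date = '2011-11-03'
""").fetchall()
```

2011-11-03 sits in the middle of all five sample days, so its window sums 1+2+3+4+5 = 15.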
qid & accept id:
(8315026, 8315588)
query:
select with condition oracle
soup:
You could use a CASE statement
\nSELECT id\n FROM table\n WHERE age = (CASE WHEN variable = 'aaa' \n THEN 21\n WHEN variable = 'bbb'\n THEN 99\n ELSE null\n END)\n
\nHowever, it may be more efficient and easier to read to just do an OR
\nSELECT id\n FROM table\n WHERE (variable = 'aaa' AND age = 21)\n OR (variable = 'bbb' AND age = 99)\n
\n
soup wrap:
You could use a CASE statement
SELECT id
FROM table
WHERE age = (CASE WHEN variable = 'aaa'
THEN 21
WHEN variable = 'bbb'
THEN 99
ELSE null
END)
However, it may be more efficient and easier to read to just do an OR
SELECT id
FROM table
WHERE (variable = 'aaa' AND age = 21)
OR (variable = 'bbb' AND age = 99)
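It's easy to confirm the two forms select the same rows; a toy check with Python's sqlite3 (table and data invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (id INT, variable TEXT, age INT);
INSERT INTO t VALUES (1,'aaa',21),(2,'aaa',99),(3,'bbb',99);
""")

# CASE form: compare age against a value chosen per variable
via_case = [r[0] for r in con.execute("""
SELECT id FROM t
WHERE age = CASE WHEN variable='aaa' THEN 21
                 WHEN variable='bbb' THEN 99 END
ORDER BY id""")]

# OR form: the same predicate spelled out directly
via_or = [r[0] for r in con.execute("""
SELECT id FROM t
WHERE (variable='aaa' AND age=21) OR (variable='bbb' AND age=99)
ORDER BY id""")]
```

Both return ids 1 and 3; row 2 fails the age test for its variable either way.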
qid & accept id:
(8327616, 8327659)
query:
Dynamic 'LIKE' Statement in SQL (Oracle)
soup:
You can use the CONCAT() function:
\nSELECT * \nFROM MATERIALS \nWHERE longname LIKE CONCAT(shortname, '%')\n
\nor even better, the standard || (double pipe) operator:
\nSELECT * \nFROM MATERIALS \nWHERE longname LIKE (shortname || '%')\n
\n
\nOracle's CONCAT() function does not take more than 2 arguments so one would use the cumbersome CONCAT(CONCAT(a, b), c) while with the operator it's the simple: a || b || c
\n
soup wrap:
You can use the CONCAT() function:
SELECT *
FROM MATERIALS
WHERE longname LIKE CONCAT(shortname, '%')
or even better, the standard || (double pipe) operator:
SELECT *
FROM MATERIALS
WHERE longname LIKE (shortname || '%')
Oracle's CONCAT() function does not take more than 2 arguments so one would use the cumbersome CONCAT(CONCAT(a, b), c) while with the operator it's the simple: a || b || c
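The || form is easy to demo in any engine that supports the operator; a sketch with Python's sqlite3 and invented materials:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE MATERIALS (shortname TEXT, longname TEXT);
INSERT INTO MATERIALS VALUES
 ('AL', 'AL-6061 sheet'), ('CU', 'Brass rod');
""")

# Build the LIKE pattern per row: shortname followed by anything
matches = con.execute("""
SELECT shortname FROM MATERIALS
WHERE longname LIKE (shortname || '%')
""").fetchall()
```

Only 'AL' matches, since 'Brass rod' does not start with 'CU'.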
qid & accept id:
(8337138, 8337841)
query:
SQL to get an daily average from month total
soup:
Sample data (may vary):
\nselect * into #totals from (\nselect '1001' as person, 114.00 as total, 199905 as month union\nselect '1001', 120.00, 199906 union\nselect '1001', 120.00, 199907 union\nselect '1001', 120.00, 199908 \n\n) t\n\nselect * into #calendar from (\nselect cast('19990501' as datetime) as tran_date, 'WEEKEND' as day_type union\nselect '19990502', 'WEEKEND' union\nselect '19990503', 'WORKING_DAY' union\nselect '19990504', 'WORKING_DAY' union\nselect '19990505', 'WORKING_DAY' union\nselect '19990601', 'WEEKEND' union\nselect '19990602', 'WORKING_DAY' union\nselect '19990603', 'WORKING_DAY' union\nselect '19990604', 'WORKING_DAY' union\nselect '19990605', 'WORKING_DAY' union\nselect '19990606', 'WORKING_DAY' union\nselect '19990701', 'WORKING_DAY' union\nselect '19990702', 'WEEKEND' union\nselect '19990703', 'WEEKEND' union\nselect '19990704', 'WORKING_DAY' union\nselect '19990801', 'WORKING_DAY' union\nselect '19990802', 'WORKING_DAY' union\nselect '19990803', 'WEEKEND' union\nselect '19990804', 'WEEKEND' union\nselect '19990805', 'WORKING_DAY' union\nselect '19990901', 'WORKING_DAY'\n) t\n
\nSelect statement, it returns 0 if the day is 'weekend' or not exists in calendar table. Please keep in mind that MAXRECURSION is a value between 0 and 32,767.
\n;with dates as ( \n select cast('19990501' as datetime) as tran_date \n union all \n select dateadd(dd, 1, tran_date) \n from dates where dateadd(dd, 1, tran_date) <= cast('20010101' as datetime) \n) \nselect t.person , d.tran_date, (case when wd.tran_date is not null then t.total / w_days else 0 end) as day_avg \nfrom dates d \nleft join #totals t on \n datepart(yy, d.tran_date) * 100 + datepart(mm, d.tran_date) = t.month \nleft join ( \n select datepart(yy, tran_date) * 100 + datepart(mm, tran_date) as month, count(*) as w_days \n from #calendar \n where day_type = 'WORKING_DAY' \n group by datepart(yy, tran_date) * 100 + datepart(mm, tran_date) \n) c on t.month = c.month \nleft join #calendar wd on d.tran_date = wd.tran_date and wd.day_type = 'WORKING_DAY' \nwhere t.person is not null\noption(maxrecursion 20000) \n
\n
soup wrap:
Sample data (may vary):
select * into #totals from (
select '1001' as person, 114.00 as total, 199905 as month union
select '1001', 120.00, 199906 union
select '1001', 120.00, 199907 union
select '1001', 120.00, 199908
) t
select * into #calendar from (
select cast('19990501' as datetime) as tran_date, 'WEEKEND' as day_type union
select '19990502', 'WEEKEND' union
select '19990503', 'WORKING_DAY' union
select '19990504', 'WORKING_DAY' union
select '19990505', 'WORKING_DAY' union
select '19990601', 'WEEKEND' union
select '19990602', 'WORKING_DAY' union
select '19990603', 'WORKING_DAY' union
select '19990604', 'WORKING_DAY' union
select '19990605', 'WORKING_DAY' union
select '19990606', 'WORKING_DAY' union
select '19990701', 'WORKING_DAY' union
select '19990702', 'WEEKEND' union
select '19990703', 'WEEKEND' union
select '19990704', 'WORKING_DAY' union
select '19990801', 'WORKING_DAY' union
select '19990802', 'WORKING_DAY' union
select '19990803', 'WEEKEND' union
select '19990804', 'WEEKEND' union
select '19990805', 'WORKING_DAY' union
select '19990901', 'WORKING_DAY'
) t
The select statement returns 0 if the day is a 'weekend' or does not exist in the calendar table. Please keep in mind that MAXRECURSION takes a value between 0 and 32,767.
;with dates as (
select cast('19990501' as datetime) as tran_date
union all
select dateadd(dd, 1, tran_date)
from dates where dateadd(dd, 1, tran_date) <= cast('20010101' as datetime)
)
select t.person , d.tran_date, (case when wd.tran_date is not null then t.total / w_days else 0 end) as day_avg
from dates d
left join #totals t on
datepart(yy, d.tran_date) * 100 + datepart(mm, d.tran_date) = t.month
left join (
select datepart(yy, tran_date) * 100 + datepart(mm, tran_date) as month, count(*) as w_days
from #calendar
where day_type = 'WORKING_DAY'
group by datepart(yy, tran_date) * 100 + datepart(mm, tran_date)
) c on t.month = c.month
left join #calendar wd on d.tran_date = wd.tran_date and wd.day_type = 'WORKING_DAY'
where t.person is not null
option(maxrecursion 20000)
qid & accept id:
(8350660, 8350874)
query:
LINQ OrderBy Count of Records in a Joined Table
soup:
You need to execute a group by if you want the count
\nSELECT P.Name\nFROM Product P\n INNER JOIN OrderItems OI ON P.productID = OI.productID\n INNER JOIN Orders O ON OI.orderID = O.orderId\nWHERE P.Active = 1 AND O.Status > 2\nGROUP BY P.Name\nORDER BY count(*) DESC\n
\nI'll assume you actually want the count for each group in the projection.
\nfrom p in CRM.tProducts\n join oi in CRM.tOrderItems on p.prodID equals oi.prodID\n join o in CRM.tOrders on oi.orderID equals o.orderID\nwhere o.status > 1 && p.active == true\ngroup p by p.Name into nameGroup\norderby nameGroup.Count()\nselect new { Name = nameGroup.Key, Count = nameGroup.Count() };\n
\n
soup wrap:
You need to execute a GROUP BY if you want the count:
SELECT P.Name
FROM Product P
INNER JOIN OrderItems OI ON P.productID = OI.productID
INNER JOIN Orders O ON OI.orderID = O.orderId
WHERE P.Active = 1 AND O.Status > 2
GROUP BY P.Name
ORDER BY count(*) DESC
I'll assume you actually want the count for each group in the projection.
from p in CRM.tProducts
join oi in CRM.tOrderItems on p.prodID equals oi.prodID
join o in CRM.tOrders on oi.orderID equals o.orderID
where o.status > 1 && p.active == true
group p by p.Name into nameGroup
orderby nameGroup.Count()
select new { Name = nameGroup.Key, Count = nameGroup.Count() };
qid & accept id:
(8384688, 8384704)
query:
Concat two table columns and update one with result
soup:
Try this (for MySQL)
\nUPDATE your_table\nSET col1 = CONCAT_WS('.', col1, col2)\n
\nand this for MS-SQL
\nUPDATE your_table\nSET col1 =col1 || "." || col2\n
\n
soup wrap:
Try this (for MySQL)
UPDATE your_table
SET col1 = CONCAT_WS('.', col1, col2)
and this for MS-SQL, which uses + for string concatenation and single-quoted literals:
UPDATE your_table
SET col1 = col1 + '.' + col2
qid & accept id:
(8423506, 8423824)
query:
T-SQL Dynamically execute stored procedure
soup:
Quite simple
\nCREATE PROCEDURE [logging] \n @PROCID int,,\n @MESSAGE VARCHAR(MAX)\n-- allows resolution of @PROCID in some circumstances\n-- eg nested calls, no direct permission on inner proc\nWITH EXECUTE AS OWNER\nAS\nBEGIN\n -- you are using schemas, right?\n PRINT OBJECT_SCHEMA_NAME(@PROCID) + '.' + OBJECT_NAME(@PROCID);\n PRINT @MESSAGE\nEND;\nGO\n
\nThen
\nexecute logging @@PROCID, N'log_message';\n
\nMSDN on OBJECT_SCHEMA_NAME and @@PROCID
\nEdit:
\nBeware of logging into tables during transactions. On rollback, you'll lose the log data
\n
soup wrap:
Quite simple
CREATE PROCEDURE [logging]
@PROCID int,
@MESSAGE VARCHAR(MAX)
-- allows resolution of @PROCID in some circumstances
-- eg nested calls, no direct permission on inner proc
WITH EXECUTE AS OWNER
AS
BEGIN
-- you are using schemas, right?
PRINT OBJECT_SCHEMA_NAME(@PROCID) + '.' + OBJECT_NAME(@PROCID);
PRINT @MESSAGE
END;
GO
Then
execute logging @@PROCID, N'log_message';
MSDN on OBJECT_SCHEMA_NAME and @@PROCID
Edit:
Beware of logging to tables inside transactions: on rollback, you'll lose the log data.
qid & accept id:
(8451219, 8456920)
query:
How do I copy or import Oracle schemas between two different databases on different servers?
soup:
Similarly, if you're using Oracle 10g+, you should be able to make this work with Data Pump:
\nexpdp user1/pass1@db1 directory=dp_out schemas=user1 dumpfile=user1.dmp logfile=user1.log\n
\nAnd to import:
\nimpdp user2/pass2@db2 directory=dp_out remap_schema=user1:user2 dumpfile=user1.dmp logfile=user2.log\n
\n
soup wrap:
Similarly, if you're using Oracle 10g+, you should be able to make this work with Data Pump:
expdp user1/pass1@db1 directory=dp_out schemas=user1 dumpfile=user1.dmp logfile=user1.log
And to import:
impdp user2/pass2@db2 directory=dp_out remap_schema=user1:user2 dumpfile=user1.dmp logfile=user2.log
qid & accept id:
(8451558, 8451820)
query:
Remove a decimal from many fields
soup:
It sounds like you just need a simple REPLACE
\nSQL> with x as (\n 2 select '123E4.00' str from dual\n 3 union all\n 4 select '123K5.00' from dual\n 5 union all\n 6 select '123K123' from dual\n 7 )\n 8 select replace( str, '.' )\n 9 from x;\n\nREPLACE(\n--------\n123E400\n123K500\n123K123\n
\nYou'd need to turn that into an UPDATE statement against your table
\nUPDATE some_table\n SET some_column = REPLACE( some_column, '.' )\n WHERE some_column != REPLACE( some_column, '.' )\n
\n
soup wrap:
It sounds like you just need a simple REPLACE
SQL> with x as (
2 select '123E4.00' str from dual
3 union all
4 select '123K5.00' from dual
5 union all
6 select '123K123' from dual
7 )
8 select replace( str, '.' )
9 from x;
REPLACE(
--------
123E400
123K500
123K123
You'd need to turn that into an UPDATE statement against your table
UPDATE some_table
SET some_column = REPLACE( some_column, '.' )
WHERE some_column != REPLACE( some_column, '.' )
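Oracle's two-argument REPLACE treats the missing third argument as "remove"; most other engines (SQLite, SQL Server, MySQL) require an explicit empty string. A sketch of the same UPDATE against in-memory SQLite, with invented sample values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE some_table (some_column TEXT)")
conn.executemany("INSERT INTO some_table VALUES (?)",
                 [("123E4.00",), ("123K5.00",), ("123K123",)])

# SQLite's REPLACE requires the replacement argument ('' here).
conn.execute("""
    UPDATE some_table
       SET some_column = REPLACE(some_column, '.', '')
     WHERE some_column != REPLACE(some_column, '.', '')
""")
cleaned = [r[0] for r in conn.execute("SELECT some_column FROM some_table ORDER BY rowid")]
```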
qid & accept id:
(8524475, 8527922)
query:
Join tables by suitable period
soup:
SET search_path='tmp';\n\nDROP TABLE items CASCADE;\nCREATE TABLE items\n ( item_id INTEGER NOT NULL PRIMARY KEY\n , item VARCHAR\n , save_date date NOT NULL\n );\nINSERT INTO items(item_id,item,save_date) VALUES\n ( 1, 'car', '2011-12-01' )\n,( 2, 'wheel', '2011-12-10' )\n,( 3, 'screen', '2011-12-11' )\n,( 4, 'table', '2011-12-15' )\n ;\n\nDROP TABLE periods CASCADE;\nCREATE TABLE periods\n ( period_id INTEGER NOT NULL PRIMARY KEY\n , period_name VARCHAR\n , start_date date NOT NULL\n );\nINSERT INTO periods(period_id,period_name,start_date) VALUES\n ( 1, 'period1', '2011-12-05' )\n,( 2, 'period2', '2011-12-09' )\n,( 3, 'period3', '2011-12-12' )\n ;\n-- self-join to find the next interval\nWITH pe AS (\n SELECT p0.period_id,p0.period_name,p0.start_date\n , p1.start_date AS end_date\n FROM periods p0\n -- must be a left join; because the most recent interval is still open\n -- (has no successor)\n LEFT JOIN periods p1 ON p1.start_date > p0.start_date\n WHERE NOT EXISTS (\n SELECT * FROM periods px\n WHERE px.start_date > p0.start_date\n AND px.start_date < p1.start_date\n )\n )\nSELECT it.item_id\n , it.item\n , it.save_date\n , pe.period_id\n , pe.period_name\n , pe.start_date\n , pe.end_date\nFROM items it\nLEFT JOIN pe\n ON it.save_date >= pe.start_date\n AND ( it.save_date < pe.end_date OR pe.end_date IS NULL)\n ;\n
\nThe result:
\n item_id | item | save_date | period_id | period_name | start_date | end_date\n---------+--------+------------+-----------+-------------+------------+------------\n 1 | car | 2011-12-01 | | | |\n 2 | wheel | 2011-12-10 | 2 | period2 | 2011-12-09 | 2011-12-12\n 3 | screen | 2011-12-11 | 2 | period2 | 2011-12-09 | 2011-12-12\n 4 | table | 2011-12-15 | 3 | period3 | 2011-12-12 |\n(4 rows)\n
\n
soup wrap:
SET search_path='tmp';
DROP TABLE items CASCADE;
CREATE TABLE items
( item_id INTEGER NOT NULL PRIMARY KEY
, item VARCHAR
, save_date date NOT NULL
);
INSERT INTO items(item_id,item,save_date) VALUES
( 1, 'car', '2011-12-01' )
,( 2, 'wheel', '2011-12-10' )
,( 3, 'screen', '2011-12-11' )
,( 4, 'table', '2011-12-15' )
;
DROP TABLE periods CASCADE;
CREATE TABLE periods
( period_id INTEGER NOT NULL PRIMARY KEY
, period_name VARCHAR
, start_date date NOT NULL
);
INSERT INTO periods(period_id,period_name,start_date) VALUES
( 1, 'period1', '2011-12-05' )
,( 2, 'period2', '2011-12-09' )
,( 3, 'period3', '2011-12-12' )
;
-- self-join to find the next interval
WITH pe AS (
SELECT p0.period_id,p0.period_name,p0.start_date
, p1.start_date AS end_date
FROM periods p0
-- must be a left join; because the most recent interval is still open
-- (has no successor)
LEFT JOIN periods p1 ON p1.start_date > p0.start_date
WHERE NOT EXISTS (
SELECT * FROM periods px
WHERE px.start_date > p0.start_date
AND px.start_date < p1.start_date
)
)
SELECT it.item_id
, it.item
, it.save_date
, pe.period_id
, pe.period_name
, pe.start_date
, pe.end_date
FROM items it
LEFT JOIN pe
ON it.save_date >= pe.start_date
AND ( it.save_date < pe.end_date OR pe.end_date IS NULL)
;
The result:
item_id | item | save_date | period_id | period_name | start_date | end_date
---------+--------+------------+-----------+-------------+------------+------------
1 | car | 2011-12-01 | | | |
2 | wheel | 2011-12-10 | 2 | period2 | 2011-12-09 | 2011-12-12
3 | screen | 2011-12-11 | 2 | period2 | 2011-12-09 | 2011-12-12
4 | table | 2011-12-15 | 3 | period3 | 2011-12-12 |
(4 rows)
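The self-join/NOT EXISTS pattern is not PostgreSQL-specific; the same query (trimmed to item and period name) runs unchanged on SQLite, as this sketch with the answer's sample data shows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items  (item_id INTEGER PRIMARY KEY, item TEXT, save_date TEXT);
CREATE TABLE periods(period_id INTEGER PRIMARY KEY, period_name TEXT, start_date TEXT);
INSERT INTO items  VALUES (1,'car','2011-12-01'),(2,'wheel','2011-12-10'),
                          (3,'screen','2011-12-11'),(4,'table','2011-12-15');
INSERT INTO periods VALUES (1,'period1','2011-12-05'),(2,'period2','2011-12-09'),
                           (3,'period3','2011-12-12');
""")

# Pair each period with its immediate successor to derive end_date, leaving
# the most recent period open (end_date NULL), then range-join the items.
rows = conn.execute("""
WITH pe AS (
  SELECT p0.period_name, p0.start_date, p1.start_date AS end_date
  FROM periods p0
  LEFT JOIN periods p1 ON p1.start_date > p0.start_date
  WHERE NOT EXISTS (SELECT 1 FROM periods px
                    WHERE px.start_date > p0.start_date
                      AND px.start_date < p1.start_date)
)
SELECT it.item, pe.period_name
FROM items it
LEFT JOIN pe ON it.save_date >= pe.start_date
            AND (it.save_date < pe.end_date OR pe.end_date IS NULL)
ORDER BY it.item_id
""").fetchall()
```

ISO-formatted date strings compare correctly as text, which is what makes the SQLite version work without a date type.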
qid & accept id:
(8527731, 8527765)
query:
sql subquery group by
soup:
SELECT REF, UserName, TransDate\nFROM dbo.MyTable \nWHERE ID = (\n SELECT TOP 1 ID\n FROM dbo.MyTable\n WHERE Status = 1 AND REF = 1001\n ORDER BY TransDate ASC\n)\n
\nEDIT:
\nOr, if you need the results for each REF, instead of a specific REF, you can try this:
\nSELECT mt.REF, mt.UserName, mt.TransDate\nFROM \n dbo.MyTable mt JOIN (\n SELECT\n REF,\n MIN(TransDate) AS MinTransDate\n FROM dbo.MyTable\n WHERE Status = 1\n GROUP BY REF\n ) MinResult mr ON mr.REF = mt.REF AND mr.MinTransDate = mt.TransDate\n
\n
soup wrap:
SELECT REF, UserName, TransDate
FROM dbo.MyTable
WHERE ID = (
SELECT TOP 1 ID
FROM dbo.MyTable
WHERE Status = 1 AND REF = 1001
ORDER BY TransDate ASC
)
EDIT:
Or, if you need the results for each REF, instead of a specific REF, you can try this:
SELECT mt.REF, mt.UserName, mt.TransDate
FROM
dbo.MyTable mt JOIN (
SELECT
REF,
MIN(TransDate) AS MinTransDate
FROM dbo.MyTable
WHERE Status = 1
GROUP BY REF
) mr ON mr.REF = mt.REF AND mr.MinTransDate = mt.TransDate
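The per-REF variant can be checked quickly with an in-memory SQLite table; the rows below are invented sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE MyTable
                (ID INTEGER, REF INTEGER, UserName TEXT, TransDate TEXT, Status INTEGER)""")
conn.executemany("INSERT INTO MyTable VALUES (?,?,?,?,?)", [
    (1, 1001, 'alice', '2011-01-05', 1),
    (2, 1001, 'bob',   '2011-01-02', 1),   # earliest qualifying row for REF 1001
    (3, 1002, 'carol', '2011-01-03', 1),
    (4, 1001, 'dave',  '2011-01-01', 0),   # Status <> 1, excluded from the MIN
])

# Earliest qualifying TransDate per REF, via a grouped derived table joined back.
rows = conn.execute("""
    SELECT mt.REF, mt.UserName
    FROM MyTable mt
    JOIN (SELECT REF, MIN(TransDate) AS MinTransDate
          FROM MyTable WHERE Status = 1 GROUP BY REF) mr
      ON mr.REF = mt.REF AND mr.MinTransDate = mt.TransDate
    ORDER BY mt.REF
""").fetchall()
```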
qid & accept id:
(8546198, 8546321)
query:
Selecting using two column names, using the other one if one is known of each record
soup:
You can use a case statement in your join condition, something like this:
\nSELECT * FROM games g\n JOIN accounts a \n ON a.id = case g.userid1 when ? then g.userid2 else g.userid1 end\nWHERE \n g.userid1 = ? OR g.userid2 = ?\n
\nHowever, depending on your indexes, it may be quicker to use a union, eg.
\n SELECT * FROM games g\n JOIN accounts a ON a.id = g.userid2\n WHERE g.userid1 = ?\nUNION ALL\n SELECT * FROM games g\n JOIN accounts a ON a.id = g.userid1\n WHERE g.userid2 = ?\n
\nAn alternative query using OR,
\nSELECT * FROM games g, accounts a \nWHERE \n (g.userid1 = ? AND g.userid2 = a.id) \n OR (g.userid2 = ? AND g.userid1 = a.id)\n
\n
soup wrap:
You can use a case statement in your join condition, something like this:
SELECT * FROM games g
JOIN accounts a
ON a.id = case g.userid1 when ? then g.userid2 else g.userid1 end
WHERE
g.userid1 = ? OR g.userid2 = ?
However, depending on your indexes, it may be quicker to use a union, eg.
SELECT * FROM games g
JOIN accounts a ON a.id = g.userid2
WHERE g.userid1 = ?
UNION ALL
SELECT * FROM games g
JOIN accounts a ON a.id = g.userid1
WHERE g.userid2 = ?
An alternative query using OR,
SELECT * FROM games g, accounts a
WHERE
(g.userid1 = ? AND g.userid2 = a.id)
OR (g.userid2 = ? AND g.userid1 = a.id)
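The CASE-in-join-condition form can be demonstrated with a toy games/accounts pair in SQLite (the data and player id are invented); for a known player id, each matching game joins to the *other* participant's account:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE accounts (id INTEGER, name TEXT);
CREATE TABLE games    (userid1 INTEGER, userid2 INTEGER);
INSERT INTO accounts VALUES (1,'alice'),(2,'bob'),(3,'carol');
INSERT INTO games VALUES (1,2),(3,1),(2,3);
""")

me = 1  # the known player; we want each game's opponent
opponents = conn.execute("""
    SELECT a.name
    FROM games g
    JOIN accounts a
      ON a.id = CASE g.userid1 WHEN ? THEN g.userid2 ELSE g.userid1 END
    WHERE g.userid1 = ? OR g.userid2 = ?
    ORDER BY a.name
""", (me, me, me)).fetchall()
```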
qid & accept id:
(8578252, 8578411)
query:
change sql parameter to date decimal
soup:
Try something like this.
\n select CAST(replace(convert(varchar, getdate(), 101), '/', '') AS DECIMAL)\n
\nOr something like this where @normaldate is the search date.
\nSELECT decimaldate FROM TABLE1 WHERE decimaldate = CAST(replace(convert(varchar, @normaldate, 101), '/', '') AS DECIMAL)\n
\n
soup wrap:
Try something like this.
select CAST(replace(convert(varchar, getdate(), 101), '/', '') AS DECIMAL)
Or something like this where @normaldate is the search date.
SELECT decimaldate FROM TABLE1 WHERE decimaldate = CAST(replace(convert(varchar, @normaldate, 101), '/', '') AS DECIMAL)
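T-SQL convert style 101 produces MM/DD/YYYY, so stripping the slashes and casting yields an MMDDYYYY number. The same conversion, sketched on the application side in Python:

```python
from datetime import date

# Style 101 in T-SQL is MM/DD/YYYY; removing the slashes gives MMDDYYYY,
# which is what the CAST(... AS DECIMAL) produces.
d = date(2011, 12, 28)
decimal_date = int(d.strftime("%m%d%Y"))
```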
qid & accept id:
(8610517, 9038725)
query:
Trying to replace dbms_xmlgen.xmlget with sys_xmlagg
soup:
I don't have access to an Oracle DB at the moment, so please forgive inaccuracies.
\nThe parameterization of the DBMS_XMLGEN call seems to be the goal. This is accomplished by using a little PL/SQL. The Oracle Docs for the DBMS_XMLGEN package describe a few operations which should help. First, create a context from a SYS_REFCURSOR using this form:
\nDBMS_XMLGEN.NEWCONTEXT (\n queryString IN SYS_REFCURSOR)\nRETURN ctxHandle;\n
\nThen, use the context in another form of GetXML:
\nDBMS_XMLGEN.GETXML (\n ctx IN ctxHandle, \n tmpclob IN OUT NOCOPY CLOB,\n dtdOrSchema IN number := NONE)\nRETURN BOOLEAN;\n
\nUsing this method also gives the benefit of potentially reusing the CLOB (not making a new temporary one), which may help with performance. There is another form which is more like the one you were using in your example, but loses this property.
\nOne more thing... The return of GETXML in this example should tell you whether there were rows returned or not. This should be more reliable than checking the contents of the CLOB when the operation completes. Alternately, you can use the NumRowsProcessed function on the context to get the count of the rows included in the CLOB.
\nRoughly, your code would look something like this:
\nDECLARE\n srcRefCursor SYS_REFCURSOR;\n ctxHandle ctxHandle;\n somevalue VARCHAR2(1000);\n myClob CLOB;\n hasRows boolean;\nBEGIN\n OPEN srcRefCursor FOR\n SELECT c1, c2 \n FROM t1 \n WHERE c1 = somevalue; --Note parameterized value\n\n ctxHandle := DBMS_XMLGEN.NEWCONTEXT(srcRefCursor);\n\n hasRows := DBMS_XMLGEN.GETXML(\n ctxHandle,\n myClob -- XML stored in myCLOB\n );\n\n IF (hasRows) THEN\n /* Do work on CLOB here */\n END IF;\n\n\n DBMS_XMLGEN.CLOSECONTEXT(ctxHandle);\nEND;\n
\n
soup wrap:
I don't have access to an Oracle DB at the moment, so please forgive inaccuracies.
The parameterization of the DBMS_XMLGEN call seems to be the goal. This is accomplished by using a little PL/SQL. The Oracle Docs for the DBMS_XMLGEN package describe a few operations which should help. First, create a context from a SYS_REFCURSOR using this form:
DBMS_XMLGEN.NEWCONTEXT (
queryString IN SYS_REFCURSOR)
RETURN ctxHandle;
Then, use the context in another form of GetXML:
DBMS_XMLGEN.GETXML (
ctx IN ctxHandle,
tmpclob IN OUT NOCOPY CLOB,
dtdOrSchema IN number := NONE)
RETURN BOOLEAN;
Using this method also gives the benefit of potentially reusing the CLOB (not making a new temporary one), which may help with performance. There is another form which is more like the one you were using in your example, but loses this property.
One more thing... The return of GETXML in this example should tell you whether there were rows returned or not. This should be more reliable than checking the contents of the CLOB when the operation completes. Alternately, you can use the NumRowsProcessed function on the context to get the count of the rows included in the CLOB.
Roughly, your code would look something like this:
DECLARE
srcRefCursor SYS_REFCURSOR;
ctxHandle ctxHandle;
somevalue VARCHAR2(1000);
myClob CLOB;
hasRows boolean;
BEGIN
OPEN srcRefCursor FOR
SELECT c1, c2
FROM t1
WHERE c1 = somevalue; --Note parameterized value
ctxHandle := DBMS_XMLGEN.NEWCONTEXT(srcRefCursor);
hasRows := DBMS_XMLGEN.GETXML(
ctxHandle,
myClob -- XML stored in myCLOB
);
IF (hasRows) THEN
/* Do work on CLOB here */
END IF;
DBMS_XMLGEN.CLOSECONTEXT(ctxHandle);
END;
qid & accept id:
(8629046, 8629140)
query:
How to avoid the null values
soup:
Unless you explain in more detail how those values from Value1 and Value2 columns belong together, and only if that "matching" is really deterministic, then you could do something like this:
\nDECLARE @temp TABLE (ID INT, Value1 VARCHAR(20), Value2 VARCHAR(20))\n\nINSERT INTO @temp\n (ID, Value1, Value2)\nVALUES\n (1, 'Rajan', NULL),\n (3, 'Vijayan', NULL),\n (1, NULL, 'Ravi'),\n (3, NULL, 'sudeep'),\n (2, 'kumar', NULL),\n (2, NULL, 'venkat')\n\nSELECT DISTINCT\n ID, \n (SELECT Value1 FROM @temp t2 WHERE t2.ID = t.ID AND Value1 IS NOT NULL) AS 'Value1',\n (SELECT Value2 FROM @temp t2 WHERE t2.ID = t.ID AND Value2 IS NOT NULL) AS 'Value2'\nFROM\n @temp t\n
\nThat would give you one row for each value of ID, with the non-NULL value for Value1 and the non-null value for Value2.
\nBut as your question stands right now, this approach doesn't work, since you have multiple entries for the same ID - and no explanation as to how to match the two separate values together....
\nSo as it stands right now, I would say there is no deterministic and proper solution for your question. You need to provide more information so we can find a solution for you.
\nUpdate: if you would update to SQL Server 2005 or newer, you could do something like two nested CTE's - but in that case, too, you would have to define some rule / ordering as to how the two variants with ID = 001 are joined together.....
\nSomething like:
\nDECLARE @temp TABLE (ID INT, Value1 VARCHAR(20), Value2 VARCHAR(20))\n\nINSERT INTO @temp\n (ID, Value1, Value2)\nVALUES\n (1, 'Rajan', NULL),\n (1, 'Vijayan', NULL),\n (1, NULL, 'Ravi'),\n (1, NULL, 'sudeep'),\n (2, 'kumar', NULL),\n (2, NULL, 'venkat')\n\n;WITH Value1CTE AS\n(\n SELECT ID, Value1,\n ROW_NUMBER() OVER (PARTITION BY ID ORDER BY Value1) AS 'RowNum'\n FROM @temp\n WHERE Value1 IS NOT NULL\n),\nValue2CTE AS\n(\n SELECT ID, Value2,\n ROW_NUMBER() OVER (PARTITION BY ID ORDER BY Value2) AS 'RowNum'\n FROM @temp\n WHERE Value2 IS NOT NULL\n)\nSELECT \n v1.ID, \n v1.Value1, v2.Value2\nFROM\n Value1CTE v1\nINNER JOIN \n Value2CTE v2 ON v1.ID = v2.ID AND v1.RowNum = v2.RowNum\n
\nwould give you a reproducible output of:
\nID Value1 Value2\n1 Rajan Ravi\n1 Vijayan sudeep\n2 kumar venkat\n
\nThis is under the assumption that given two entries with the SAME ID, you want to sort (ORDER BY) the actual values (e.g. Rajan before Vijayan and Ravi before sudeep --> there you'd join Rajan and Ravi together, as well as Vijayan and sudeep).
\nBut again: this is in SQL Server 2005 and newer only - no equivalent in SQL Server 2000, unfortunately.\n
\n
soup wrap:
Unless you explain in more detail how those values from Value1 and Value2 columns belong together, and only if that "matching" is really deterministic, then you could do something like this:
DECLARE @temp TABLE (ID INT, Value1 VARCHAR(20), Value2 VARCHAR(20))
INSERT INTO @temp
(ID, Value1, Value2)
VALUES
(1, 'Rajan', NULL),
(3, 'Vijayan', NULL),
(1, NULL, 'Ravi'),
(3, NULL, 'sudeep'),
(2, 'kumar', NULL),
(2, NULL, 'venkat')
SELECT DISTINCT
ID,
(SELECT Value1 FROM @temp t2 WHERE t2.ID = t.ID AND Value1 IS NOT NULL) AS 'Value1',
(SELECT Value2 FROM @temp t2 WHERE t2.ID = t.ID AND Value2 IS NOT NULL) AS 'Value2'
FROM
@temp t
That would give you one row for each value of ID, with the non-NULL value for Value1 and the non-null value for Value2.
But as your question stands right now, this approach doesn't work, since you have multiple entries for the same ID - and no explanation as to how to match the two separate values together....
So as it stands right now, I would say there is no deterministic and proper solution for your question. You need to provide more information so we can find a solution for you.
Update: if you were to upgrade to SQL Server 2005 or newer, you could do something like two nested CTEs - but in that case, too, you would have to define some rule / ordering as to how the two variants with ID = 001 are joined together.
Something like:
DECLARE @temp TABLE (ID INT, Value1 VARCHAR(20), Value2 VARCHAR(20))
INSERT INTO @temp
(ID, Value1, Value2)
VALUES
(1, 'Rajan', NULL),
(1, 'Vijayan', NULL),
(1, NULL, 'Ravi'),
(1, NULL, 'sudeep'),
(2, 'kumar', NULL),
(2, NULL, 'venkat')
;WITH Value1CTE AS
(
SELECT ID, Value1,
ROW_NUMBER() OVER (PARTITION BY ID ORDER BY Value1) AS 'RowNum'
FROM @temp
WHERE Value1 IS NOT NULL
),
Value2CTE AS
(
SELECT ID, Value2,
ROW_NUMBER() OVER (PARTITION BY ID ORDER BY Value2) AS 'RowNum'
FROM @temp
WHERE Value2 IS NOT NULL
)
SELECT
v1.ID,
v1.Value1, v2.Value2
FROM
Value1CTE v1
INNER JOIN
Value2CTE v2 ON v1.ID = v2.ID AND v1.RowNum = v2.RowNum
would give you a reproducible output of:
ID Value1 Value2
1 Rajan Ravi
1 Vijayan sudeep
2 kumar venkat
This is under the assumption that given two entries with the SAME ID, you want to sort (ORDER BY) the actual values (e.g. Rajan before Vijayan and Ravi before sudeep --> there you'd join Rajan and Ravi together, as well as Vijayan and sudeep).
But again: this works in SQL Server 2005 and newer only - there is no equivalent in SQL Server 2000, unfortunately.
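The matching rule the two ROW_NUMBER CTEs implement (sort the non-NULL values on each side per ID, then pair them positionally) can be mimicked outside the database; a pure-Python sketch over the answer's sample data:

```python
from collections import defaultdict

rows = [  # (ID, Value1, Value2), as in the sample table
    (1, 'Rajan', None), (1, 'Vijayan', None),
    (1, None, 'Ravi'),  (1, None, 'sudeep'),
    (2, 'kumar', None), (2, None, 'venkat'),
]

v1, v2 = defaultdict(list), defaultdict(list)
for id_, a, b in rows:
    if a is not None: v1[id_].append(a)
    if b is not None: v2[id_].append(b)

# Sorting each side and zipping positionally is exactly what the two
# ROW_NUMBER() ... ORDER BY Value CTEs do before the join on RowNum.
paired = [(id_, a, b)
          for id_ in sorted(v1)
          for a, b in zip(sorted(v1[id_]), sorted(v2[id_]))]
```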
qid & accept id:
(8636956, 8644844)
query:
How to join two tables with one of them not having a primary key and not the same character length
soup:
Try this to compare the first 8 characters only:
\nSELECT r.domainid, r.dombegin, r.domend, d.ddid \nFROM domainregion r\nJOIN dyndomrun d ON r.domainid::varchar(8) = d.ddid \nORDER BY r.domainid, d.ddid, r.dombegin, r.domend;\n
\nThe cast implicitly trims trailing characters. ddid only has 8 characters to begin with. No need to process it, too. This achieves the same:
\nJOIN dyndomrun d ON left(r.domainid, 8) = d.ddid \n
\nHowever, be advised that the string function left() was only introduced with PostgreSQL 9.1. In earlier versions you can substitute:
\nJOIN dyndomrun d ON substr(r.domainid, 1, 8) = d.ddid\n
\n__
\nBasic explanation for beginners:
\n\nThe query uses a JOIN. Read more about that in the manual.
\nFROM domainregion r is short for FROM domainregion AS r. AS is just noise in this case in PostgreSQL. The table alias makes the query shorter and easier to read but has no other impact in here. You can also use table aliases to include the same table multiple times for instance.
\nThe join condition ON r.domainid::varchar(8) = d.ddid joins only those rows together where the two expressions match exactly. Again, read about those basics in the manual (or any other source).
\n
\nIt's a simple query, not much to explain here.
\n
soup wrap:
Try this to compare the first 8 characters only:
SELECT r.domainid, r.dombegin, r.domend, d.ddid
FROM domainregion r
JOIN dyndomrun d ON r.domainid::varchar(8) = d.ddid
ORDER BY r.domainid, d.ddid, r.dombegin, r.domend;
The cast implicitly trims trailing characters. ddid only has 8 characters to begin with. No need to process it, too. This achieves the same:
JOIN dyndomrun d ON left(r.domainid, 8) = d.ddid
However, be advised that the string function left() was only introduced with PostgreSQL 9.1. In earlier versions you can substitute:
JOIN dyndomrun d ON substr(r.domainid, 1, 8) = d.ddid
Basic explanation for beginners:
The query uses a JOIN. Read more about that in the manual.
FROM domainregion r is short for FROM domainregion AS r. AS is just noise in this case in PostgreSQL. The table alias makes the query shorter and easier to read but has no other impact in here. You can also use table aliases to include the same table multiple times for instance.
The join condition ON r.domainid::varchar(8) = d.ddid joins only those rows together where the two expressions match exactly. Again, read about those basics in the manual (or any other source).
It's a simple query, not much to explain here.
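The substr() fallback is the most portable spelling of the prefix join; here is a sketch against SQLite (which also lacks left() in older builds), with invented table contents:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE domainregion (domainid TEXT);
CREATE TABLE dyndomrun    (ddid TEXT);
INSERT INTO domainregion VALUES ('ABCD1234xyz'), ('ZZZZ9999abc');
INSERT INTO dyndomrun    VALUES ('ABCD1234');
""")

# substr(col, 1, 8) compares only the first 8 characters, as in the answer.
rows = conn.execute("""
    SELECT r.domainid, d.ddid
    FROM domainregion r
    JOIN dyndomrun d ON substr(r.domainid, 1, 8) = d.ddid
""").fetchall()
```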
qid & accept id:
(8645254, 8645279)
query:
Find rows with same ID and have a particular set of names
soup:
The simplest way is to compare a COUNT per ID with the number of elements in your list:
\nSELECT\n ID\nFROM\n MyTable\nWHERE\n NAME IN ('A', 'B', 'C')\nGROUP BY\n ID\nHAVING\n COUNT(*) = 3;\n
\nNote: ORDER BY isn't needed and goes after the HAVING if needed
\nEdit, with question update. In MySQL, it's easier to use a separate table for search terms
\nDROP TABLE IF EXISTS gbn;\nCREATE TABLE gbn (ID INT, `name` VARCHAR(100), REV INT);\nINSERT gbn VALUES (1, 'A', 0);\nINSERT gbn VALUES (1, 'B', 0);\nINSERT gbn VALUES (1, 'C', 0);\nINSERT gbn VALUES (2, 'A', 1);\nINSERT gbn VALUES (2, 'B', 0);\nINSERT gbn VALUES (2, 'C', 0);\nINSERT gbn VALUES (3, 'A', 0);\nINSERT gbn VALUES (3, 'B', 0);\n\nDROP TABLE IF EXISTS gbn1;\nCREATE TABLE gbn1 ( `name` VARCHAR(100));\nINSERT gbn1 VALUES ('A');\nINSERT gbn1 VALUES ('B');\n\nSELECT\n gbn.ID\nFROM\n gbn\n LEFT JOIN\n gbn1 ON gbn.`name` = gbn1.`name`\nGROUP BY\n gbn.ID\nHAVING\n COUNT(*) = (SELECT COUNT(*) FROM gbn1)\n AND MIN(gbn.REV) = MAX(gbn.REV);\n\nINSERT gbn1 VALUES ('C');\n\nSELECT\n gbn.ID\nFROM\n gbn\n LEFT JOIN\n gbn1 ON gbn.`name` = gbn1.`name`\nGROUP BY\n gbn.ID\nHAVING\n COUNT(*) = (SELECT COUNT(*) FROM gbn1)\n AND MIN(gbn.REV) = MAX(gbn.REV);\n
\nEdit 2, without extra table, use a derived (inline) table:
\nSELECT\n gbn.ID\nFROM\n gbn\n LEFT JOIN\n (SELECT 'A' AS `name`\n UNION ALL SELECT 'B' \n UNION ALL SELECT 'C'\n ) gbn1 ON gbn.`name` = gbn1.`name`\nGROUP BY\n gbn.ID\nHAVING\n COUNT(*) = 3 -- matches number of elements in gbn1 derived table\n AND MIN(gbn.REV) = MAX(gbn.REV);\n
\n
soup wrap:
The simplest way is to compare a COUNT per ID with the number of elements in your list:
SELECT
ID
FROM
MyTable
WHERE
NAME IN ('A', 'B', 'C')
GROUP BY
ID
HAVING
COUNT(*) = 3;
Note: ORDER BY isn't needed here; if you do need one, it goes after the HAVING clause.
Edit, with question update. In MySQL, it's easier to use a separate table for search terms
DROP TABLE IF EXISTS gbn;
CREATE TABLE gbn (ID INT, `name` VARCHAR(100), REV INT);
INSERT gbn VALUES (1, 'A', 0);
INSERT gbn VALUES (1, 'B', 0);
INSERT gbn VALUES (1, 'C', 0);
INSERT gbn VALUES (2, 'A', 1);
INSERT gbn VALUES (2, 'B', 0);
INSERT gbn VALUES (2, 'C', 0);
INSERT gbn VALUES (3, 'A', 0);
INSERT gbn VALUES (3, 'B', 0);
DROP TABLE IF EXISTS gbn1;
CREATE TABLE gbn1 ( `name` VARCHAR(100));
INSERT gbn1 VALUES ('A');
INSERT gbn1 VALUES ('B');
SELECT
gbn.ID
FROM
gbn
LEFT JOIN
gbn1 ON gbn.`name` = gbn1.`name`
GROUP BY
gbn.ID
HAVING
COUNT(*) = (SELECT COUNT(*) FROM gbn1)
AND MIN(gbn.REV) = MAX(gbn.REV);
INSERT gbn1 VALUES ('C');
SELECT
gbn.ID
FROM
gbn
LEFT JOIN
gbn1 ON gbn.`name` = gbn1.`name`
GROUP BY
gbn.ID
HAVING
COUNT(*) = (SELECT COUNT(*) FROM gbn1)
AND MIN(gbn.REV) = MAX(gbn.REV);
Edit 2, without extra table, use a derived (inline) table:
SELECT
gbn.ID
FROM
gbn
LEFT JOIN
(SELECT 'A' AS `name`
UNION ALL SELECT 'B'
UNION ALL SELECT 'C'
) gbn1 ON gbn.`name` = gbn1.`name`
GROUP BY
gbn.ID
HAVING
COUNT(*) = 3 -- matches number of elements in gbn1 derived table
AND MIN(gbn.REV) = MAX(gbn.REV);
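The derived-table form ("Edit 2") runs on SQLite as well. One tweak in this sketch: an inner join is used instead of LEFT JOIN, so rows whose name is outside the search list cannot inflate COUNT(*):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE gbn (ID INTEGER, name TEXT, REV INTEGER)")
conn.executemany("INSERT INTO gbn VALUES (?,?,?)", [
    (1,'A',0),(1,'B',0),(1,'C',0),
    (2,'A',1),(2,'B',0),(2,'C',0),   # REVs differ, so ID 2 fails MIN = MAX
    (3,'A',0),(3,'B',0),             # only 2 of 3 names, so ID 3 fails COUNT
])

# IDs owning all three names at a single revision.
ids = [r[0] for r in conn.execute("""
    SELECT gbn.ID
    FROM gbn
    JOIN (SELECT 'A' AS name UNION ALL SELECT 'B' UNION ALL SELECT 'C') gbn1
      ON gbn.name = gbn1.name
    GROUP BY gbn.ID
    HAVING COUNT(*) = 3 AND MIN(gbn.REV) = MAX(gbn.REV)
""")]
```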
qid & accept id:
(8647675, 8649305)
query:
List category/subcategory tree and display its sub-categories in the same row
soup:
When we used to make these concatenated lists in the database we took a similar approach to what you are doing at first
\nthen when we looked for speed
\nwe made them into CLR functions\nhttp://msdn.microsoft.com/en-US/library/a8s4s5dz(v=VS.90).aspx\n
\nand now our database is only responsible for storing and retrieving data
\nthis sort of thing will be in our data layer in the application\n
\n
soup wrap:
When we used to build these concatenated lists in the database, we took a similar approach to what you are doing at first.
Then, when we went looking for speed, we made them into CLR functions:
http://msdn.microsoft.com/en-US/library/a8s4s5dz(v=VS.90).aspx
Now our database is only responsible for storing and retrieving data; this sort of thing lives in the data layer of the application.
qid & accept id:
(8669703, 8670279)
query:
How do I combine result sets from two stored procedure calls?
soup:
This may be oversimplifying the problem, but if you have control over the sp, just use in rather than =:
\nCREATE PROCEDURE [dbo].[MyStored]\nAS\n SELECT blahblahblah WHERE StoredState IN (0,1) LotsOfJoinsFollow;\nRETURN 0\n
\nIf this is not an option, just push the results of both sproc calls into a temp table:
\n/*Create a table with the same columns that the sproc returns*/\nCREATE TABLE #tempblahblah(blahblahblah NVARCHAR(50))\n\nINSERT #tempblahblah ( blahblahblah )\n EXEC MyStored 0\n\nINSERT #tempblahblah ( blahblahblah )\n EXEC MyStored 1\n\nSELECT * FROM #tempblahblah
\n
soup wrap:
This may be oversimplifying the problem, but if you have control over the stored procedure, just use IN rather than =:
CREATE PROCEDURE [dbo].[MyStored]
AS
SELECT blahblahblah WHERE StoredState IN (0,1) LotsOfJoinsFollow;
RETURN 0
If this is not an option, just push the results of both sproc calls into a temp table:
/*Create a table with the same columns that the sproc returns*/
CREATE TABLE #tempblahblah(blahblahblah NVARCHAR(50))
INSERT #tempblahblah ( blahblahblah )
EXEC MyStored 0
INSERT #tempblahblah ( blahblahblah )
EXEC MyStored 1
SELECT * FROM #tempblahblah
qid & accept id:
(8684054, 8684257)
query:
T-SQL how to get date range for 2 week pay period
soup:
You need some modulo operations and DATEDIFF.
\ndeclare @periodStart datetime\ndeclare @periodEnd datetime\n\nset @periodStart = CAST('2011-12-03' as datetime)\nset @periodEnd = CAST('2011-12-16' as datetime)\n\ndeclare @anyDate datetime\nset @anyDate = CAST('2011-12-30' as datetime)\n\ndeclare @periodLength int\nset @periodLength = DATEDIFF(day, @periodStart, @periodEnd) + 1\n\n\ndeclare @daysFromFirstPeriod int\nset @daysFromFirstPeriod = DATEDIFF(day, @periodStart, @anyDate)\ndeclare @daysIntoPeriod int\nset @daysIntoPeriod = @daysFromFirstPeriod % @periodLength\n\nselect @periodLength as periodLength, @daysFromFirstPeriod as daysFromFirstPeriod, @daysIntoPeriod as daysIntoPeriod\nselect DATEADD(day, -@daysIntoPeriod, @anyDate) as currentPeriodStart, DATEADD(day, @periodLength -@daysIntoPeriod, @anyDate) as currentPeriodEnd\n
\nGives output
\nperiodLength daysFromFirstPeriod daysIntoPeriod\n14 27 13\n
\nand
\ncurrentPeriodStart currentPeriodEnd\n2011-12-17 00:00:00.000 2011-12-31 00:00:00.000\n
\n
soup wrap:
You need some modulo operations and DATEDIFF.
declare @periodStart datetime
declare @periodEnd datetime
set @periodStart = CAST('2011-12-03' as datetime)
set @periodEnd = CAST('2011-12-16' as datetime)
declare @anyDate datetime
set @anyDate = CAST('2011-12-30' as datetime)
declare @periodLength int
set @periodLength = DATEDIFF(day, @periodStart, @periodEnd) + 1
declare @daysFromFirstPeriod int
set @daysFromFirstPeriod = DATEDIFF(day, @periodStart, @anyDate)
declare @daysIntoPeriod int
set @daysIntoPeriod = @daysFromFirstPeriod % @periodLength
select @periodLength as periodLength, @daysFromFirstPeriod as daysFromFirstPeriod, @daysIntoPeriod as daysIntoPeriod
select DATEADD(day, -@daysIntoPeriod, @anyDate) as currentPeriodStart, DATEADD(day, @periodLength -@daysIntoPeriod, @anyDate) as currentPeriodEnd
Gives output
periodLength daysFromFirstPeriod daysIntoPeriod
14 27 13
and
currentPeriodStart currentPeriodEnd
2011-12-17 00:00:00.000 2011-12-31 00:00:00.000
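The date arithmetic above is plain integer modulo, so it can be reproduced (and sanity-checked against the answer's output) with Python's datetime; note Python's % already returns a non-negative remainder for dates on or after the first period start, matching the T-SQL behavior in that range:

```python
from datetime import date, timedelta

period_start = date(2011, 12, 3)   # first day of the reference pay period
period_end   = date(2011, 12, 16)  # last day of the reference pay period
any_date     = date(2011, 12, 30)

period_length   = (period_end - period_start).days + 1   # 14-day period
days_from_first = (any_date - period_start).days
days_into       = days_from_first % period_length

current_start = any_date - timedelta(days=days_into)
current_end   = any_date + timedelta(days=period_length - days_into)
```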
qid & accept id:
(8711054, 8711080)
query:
Trying to get rid of comma at end of a column
soup:
You can use substring.
\nHere is an example:
\ndeclare @test varchar(5)\nselect @test = '12,'\n\nselect substring(@test, 1, len(@test)-1)\n
\n
\nIn your case it would be:
\nUPDATE [Database].[schema].[Table]\nSET [Columnx] = substring([Columnx], 1, len([Columnx])-1)\nWHERE [Columnx] like '%,'\nAND len([Columnx]) > 0\n
\n
soup wrap:
You can use substring.
Here is an example:
declare @test varchar(5)
select @test = '12,'
select substring(@test, 1, len(@test)-1)
In your case it would be:
UPDATE [Database].[schema].[Table]
SET [Columnx] = substring([Columnx], 1, len([Columnx])-1)
WHERE [Columnx] like '%,'
AND len([Columnx]) > 0
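The same trailing-comma trim, sketched against SQLite (whose spellings are substr and length rather than SUBSTRING and LEN), with invented sample values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Columnx TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("12,",), ("7",), ("a,b,",)])

# Only rows ending in a comma are touched; the LIKE '%,' predicate
# already guarantees a non-empty string.
conn.execute("""
    UPDATE t
       SET Columnx = substr(Columnx, 1, length(Columnx) - 1)
     WHERE Columnx LIKE '%,'
""")
values = [r[0] for r in conn.execute("SELECT Columnx FROM t ORDER BY rowid")]
```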
qid & accept id:
(8718458, 8718594)
query:
view all data for duplicate rows in oracle
soup:
You can always use the GROUP BY/ HAVING query in an IN clause. This works and is relatively straightforward but it may not be particularly efficient if the number of duplicate rows is relatively large.
\nSELECT *\n FROM table1\n WHERE (name, type_id) IN (SELECT name, type_id\n FROM table1\n GROUP BY name, type_id\n HAVING COUNT(*) > 1)\n
\nIt would generally be more efficient to use analytic functions in order to avoid hitting the table a second time.
\nSELECT *\n FROM (SELECT id, \n name,\n type_id,\n code,\n lat,\n long,\n count(*) over (partition by name, type_id) cnt\n FROM table1)\n WHERE cnt > 1\n
\nDepending on what you are planning to do with the data and how many duplicates of a particular row there might be, you also might want to join table1 to itself to get the data in a single row
\nSELECT a.name,\n a.type_id,\n a.id,\n b.id,\n a.code,\n b.code,\n a.lat,\n b.lat,\n a.long,\n b.long\n FROM table1 a\n JOIN table1 b ON (a.name = b.name AND\n a.type_id = b.type_id AND\n a.rowid > b.rowid)\n
\n
soup wrap:
You can always use the GROUP BY/ HAVING query in an IN clause. This works and is relatively straightforward but it may not be particularly efficient if the number of duplicate rows is relatively large.
SELECT *
FROM table1
WHERE (name, type_id) IN (SELECT name, type_id
FROM table1
GROUP BY name, type_id
HAVING COUNT(*) > 1)
It would generally be more efficient to use analytic functions in order to avoid hitting the table a second time.
SELECT *
FROM (SELECT id,
name,
type_id,
code,
lat,
long,
count(*) over (partition by name, type_id) cnt
FROM table1)
WHERE cnt > 1
Depending on what you are planning to do with the data and how many duplicates of a particular row there might be, you also might want to join table1 to itself to get the data in a single row
SELECT a.name,
a.type_id,
a.id,
b.id,
a.code,
b.code,
a.lat,
b.lat,
a.long,
b.long
FROM table1 a
JOIN table1 b ON (a.name = b.name AND
a.type_id = b.type_id AND
a.rowid > b.rowid)
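The third (self-join) form carries over almost verbatim, since SQLite also exposes an implicit rowid like Oracle's. A sketch with invented rows, pairing each duplicate with its earlier twin:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (id INTEGER, name TEXT, type_id INTEGER)")
conn.executemany("INSERT INTO table1 VALUES (?,?,?)", [
    (1, 'foo', 10), (2, 'foo', 10), (3, 'bar', 20),
])

# a.rowid > b.rowid pairs each duplicate row with an earlier duplicate,
# so each duplicated (name, type_id) group appears once per extra copy.
dupe_pairs = conn.execute("""
    SELECT a.id, b.id, a.name
    FROM table1 a
    JOIN table1 b ON a.name = b.name AND a.type_id = b.type_id
                 AND a.rowid > b.rowid
""").fetchall()
```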
qid & accept id:
(8806028, 8806289)
query:
How to do calculations with crosstab/pivot via case in sqlite?
soup:
You can always just do the sums again, like so:
\nSELECT \n shop_id,\n sum(CASE WHEN product = 'Fiesta' THEN units END) as Fiesta,\n sum(CASE WHEN product = 'Focus' THEN units END) as Focus,\n sum(CASE WHEN product = 'Puma' THEN units END) as Puma,\n sum(CASE WHEN product = 'Fiesta' THEN units END) / sum(CASE WHEN product = 'Focus' THEN units END) as Ratio\nFROM sales\nGROUP BY shop_id\n
\nOr, faster, you can wrap it up in a subquery, like this:
\nselect\n shop_id,\n Fiesta,\n Focus,\n Puma,\n Fiesta/Focus as Ratio\nfrom\n (\n SELECT \n shop_id,\n sum(CASE WHEN product = 'Fiesta' THEN units END) as Fiesta,\n sum(CASE WHEN product = 'Focus' THEN units END) as Focus,\n sum(CASE WHEN product = 'Puma' THEN units END) as Puma\n FROM sales\n GROUP BY shop_id\n ) x\n
\n
soup wrap:
You can always just do the sums again, like so:
SELECT
shop_id,
sum(CASE WHEN product = 'Fiesta' THEN units END) as Fiesta,
sum(CASE WHEN product = 'Focus' THEN units END) as Focus,
sum(CASE WHEN product = 'Puma' THEN units END) as Puma,
sum(CASE WHEN product = 'Fiesta' THEN units END) / sum(CASE WHEN product = 'Focus' THEN units END) as Ratio
FROM sales
GROUP BY shop_id
Or, faster, you can wrap it up in a subquery, like this:
select
shop_id,
Fiesta,
Focus,
Puma,
Fiesta/Focus as Ratio
from
(
SELECT
shop_id,
sum(CASE WHEN product = 'Fiesta' THEN units END) as Fiesta,
sum(CASE WHEN product = 'Focus' THEN units END) as Focus,
sum(CASE WHEN product = 'Puma' THEN units END) as Puma
FROM sales
GROUP BY shop_id
) x
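The subquery form above runs as-is on SQLite; here is a sketch with invented rows. One caveat: with integer `units` columns SQLite does integer division, so the ratio is multiplied by 1.0 to force a real result:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (shop_id INTEGER, product TEXT, units INTEGER)")
con.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    (1, "Fiesta", 10), (1, "Focus", 5), (1, "Puma", 3),
])

# Pivot products into columns with conditional SUMs, then reuse the columns
row = con.execute("""
    SELECT shop_id, Fiesta, Focus, Puma, 1.0 * Fiesta / Focus AS Ratio
    FROM (
        SELECT shop_id,
               SUM(CASE WHEN product = 'Fiesta' THEN units END) AS Fiesta,
               SUM(CASE WHEN product = 'Focus'  THEN units END) AS Focus,
               SUM(CASE WHEN product = 'Puma'   THEN units END) AS Puma
        FROM sales
        GROUP BY shop_id
    )
""").fetchone()
```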
qid & accept id:
(8847175, 8847300)
query:
how to select from table untill the total is a specific number?
soup:
You could make it easier for yourself by adding an extra column, containing the sum of the amounts with a lower ID.
\n"ID" "oamount" "mamount"\n'1' '1500' '0'\n'2' '2000' '1500'\n'3' '2000' '3500'\n'4' '1000' '5500'\n
\nYou can then select based on that new column:
\nSELECT `ID`,\n CASE WHEN `oamount` < @Amount - `mamount`\n THEN `oamount`\n ELSE @Amount - `mamount` END AS `amount`\nFROM `yourtable`\nWHERE `mamount` < @Amount\n
\nYou can do it without adding this extra column, but you'll be making things unnecessarily hard.
\n
soup wrap:
You could make it easier for yourself by adding an extra column, containing the sum of the amounts with a lower ID.
"ID" "oamount" "mamount"
'1' '1500' '0'
'2' '2000' '1500'
'3' '2000' '3500'
'4' '1000' '5500'
You can then select based on that new column:
SELECT `ID`,
CASE WHEN `oamount` < @Amount - `mamount`
THEN `oamount`
ELSE @Amount - `mamount` END AS `amount`
FROM `yourtable`
WHERE `mamount` < @Amount
You can do it without adding this extra column, but you'll be making things unnecessarily hard.
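If you would rather not maintain the extra column, engines with window functions can derive mamount on the fly as a running sum of the preceding rows. A SQLite sketch with the sample amounts, and 4000 standing in for @Amount:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE yourtable (ID INTEGER, oamount INTEGER)")
con.executemany("INSERT INTO yourtable VALUES (?, ?)",
                [(1, 1500), (2, 2000), (3, 2000), (4, 1000)])

amount = 4000  # stands in for @Amount

# mamount = sum of oamount over all rows with a lower ID (0 for the first row)
rows = con.execute("""
    SELECT ID,
           CASE WHEN oamount < :amt - mamount
                THEN oamount
                ELSE :amt - mamount END AS amount
    FROM (
        SELECT ID, oamount,
               COALESCE(SUM(oamount) OVER (ORDER BY ID
                   ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING), 0) AS mamount
        FROM yourtable
    )
    WHERE mamount < :amt
    ORDER BY ID
""", {"amt": amount}).fetchall()
```

The last selected row is clipped so the amounts add up to exactly 4000.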
qid & accept id:
(8928978, 8929247)
query:
How can I use the LIKE operator on a list of strings to compare?
soup:
You could do something like this -
\nSELECT FIND_IN_SET(\n 'bigD',\n REPLACE(REPLACE('barfy,max,whiskers,champ,big-D,Big D,Sally', '-', ''), ' ', '')\n ) has_petname;\n+-------------+\n| has_petname |\n+-------------+\n| 5 |\n+-------------+\n
\nIt will give a non-zero value (>0) if there is a pet_name we are looking for.
\nBut I'd suggest you to create a table petnames and use SOUNDS LIKE function to compare names, in this case 'bigD' will be equal to 'big-D', e.g.:
\nSELECT 'bigD' SOUNDS LIKE 'big-D';\n+---------------------------+\n| 'bigD'SOUNDS LIKE 'big-D' |\n+---------------------------+\n| 1 |\n+---------------------------+\n
\nExample:
\nCREATE TABLE petnames(name VARCHAR(40));\nINSERT INTO petnames VALUES\n ('barfy'),('max'),('whiskers'),('champ'),('big-D'),('Big D'),('Sally');\n\nSELECT name FROM petnames WHERE 'bigD' SOUNDS LIKE name;\n+-------+\n| name |\n+-------+\n| big-D |\n| Big D |\n+-------+\n
\n
soup wrap:
You could do something like this -
SELECT FIND_IN_SET(
'bigD',
REPLACE(REPLACE('barfy,max,whiskers,champ,big-D,Big D,Sally', '-', ''), ' ', '')
) has_petname;
+-------------+
| has_petname |
+-------------+
| 5 |
+-------------+
It will give a non-zero value (>0) if there is a pet_name we are looking for.
But I'd suggest you create a table petnames and use the SOUNDS LIKE operator to compare names; in that case 'bigD' will match 'big-D', e.g.:
SELECT 'bigD' SOUNDS LIKE 'big-D';
+---------------------------+
| 'bigD'SOUNDS LIKE 'big-D' |
+---------------------------+
| 1 |
+---------------------------+
Example:
CREATE TABLE petnames(name VARCHAR(40));
INSERT INTO petnames VALUES
('barfy'),('max'),('whiskers'),('champ'),('big-D'),('Big D'),('Sally');
SELECT name FROM petnames WHERE 'bigD' SOUNDS LIKE name;
+-------+
| name |
+-------+
| big-D |
| Big D |
+-------+
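The FIND_IN_SET trick boils down to normalising both sides before comparing. A plain-Python sketch of that idea (just stripping dashes and spaces and lower-casing; unlike SOUNDS LIKE it does no SOUNDEX phonetic matching):

```python
def normalize(name):
    """Strip dashes and spaces and lower-case, so 'big-D' == 'Big D' == 'bigD'."""
    return name.replace("-", "").replace(" ", "").lower()

petnames = ["barfy", "max", "whiskers", "champ", "big-D", "Big D", "Sally"]
matches = [n for n in petnames if normalize(n) == normalize("bigD")]
```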
qid & accept id:
(8939857, 8941609)
query:
Generating a series from a predefined date (PG)
soup:
Turns out, it can be even simpler. :)
\nSELECT generate_series(\n date_trunc('year', min(created_at))\n , now()\n , interval '1 month') AS month;\nFROM users;\n
\nMore about date_trunc in the manual.
\nOr, if you actually want the data type date instead of timestamp with time zone:
\nSELECT generate_series(\n date_trunc('year', min(created_at))\n , now()\n , interval '1 month')::date AS month;\nFROM users;\n
\n
soup wrap:
Turns out, it can be even simpler. :)
SELECT generate_series(
date_trunc('year', min(created_at))
, now()
, interval '1 month') AS month
FROM users;
More about date_trunc in the manual.
Or, if you actually want the data type date instead of timestamp with time zone:
SELECT generate_series(
date_trunc('year', min(created_at))
, now()
, interval '1 month')::date AS month
FROM users;
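What the series produces is easy to check outside the database. A plain-Python sketch of generate_series(date_trunc('year', min(created_at)), now(), interval '1 month'), with invented dates standing in for the earliest signup and for now():

```python
from datetime import date

def month_series(earliest, end):
    """First-of-month dates from the start of `earliest`'s year up to `end`."""
    months = []
    y, m = earliest.year, 1          # date_trunc('year', ...) -> January 1st
    while date(y, m, 1) <= end:
        months.append(date(y, m, 1))
        m += 1
        if m > 12:
            y, m = y + 1, 1
    return months

series = month_series(date(2011, 3, 15), date(2011, 6, 1))
```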
qid & accept id:
(9015870, 9016168)
query:
Find position of given PK and next and previous row as one result row
soup:
Try this one -
\n+row position
\nSELECT car_id, url, signup, CONCAT(pos1, '/', @p1) position FROM (\n SELECT\n c.*,\n @p1:=@p1+1 pos1,\n @p2:=IF(car_id = 3 AND @p2 IS NULL, @p1, @p2)\n FROM\n cars c,\n (SELECT @p1:=0, @p2:=NULL) t\n ORDER BY\n signup\n) t\nWHERE\n pos1 BETWEEN @p2 - 1 AND @p2 + 1\n
\n
\nYou wrote:\nthe desired result would be: pos, nextid, nexturl, previd, prevurl
\nTry this query:
\nSELECT\n @p2 pos,\n MAX(IF(pos1 > @p2, car_id, NULL)) nextid,\n MAX(IF(pos1 > @p2, url, NULL)) nexturl,\n MAX(IF(pos1 < @p2, car_id, NULL)) previd,\n MAX(IF(pos1 < @p2, url, NULL)) prevurl\nFROM (\n SELECT\n c.*,\n @p1:=@p1+1 pos1,\n @p2:=IF(car_id = 3 AND @p2 IS NULL, @p1, @p2)\n FROM\n cars c,\n (SELECT @p1:=0, @p2:=NULL) t\n ORDER BY\n signup\n) t\nWHERE\n pos1 BETWEEN @p2 - 1 AND @p2 + 1\n
\n
soup wrap:
Try this one -
+row position
SELECT car_id, url, signup, CONCAT(pos1, '/', @p1) position FROM (
SELECT
c.*,
@p1:=@p1+1 pos1,
@p2:=IF(car_id = 3 AND @p2 IS NULL, @p1, @p2)
FROM
cars c,
(SELECT @p1:=0, @p2:=NULL) t
ORDER BY
signup
) t
WHERE
pos1 BETWEEN @p2 - 1 AND @p2 + 1
You wrote:
the desired result would be: pos, nextid, nexturl, previd, prevurl
Try this query:
SELECT
@p2 pos,
MAX(IF(pos1 > @p2, car_id, NULL)) nextid,
MAX(IF(pos1 > @p2, url, NULL)) nexturl,
MAX(IF(pos1 < @p2, car_id, NULL)) previd,
MAX(IF(pos1 < @p2, url, NULL)) prevurl
FROM (
SELECT
c.*,
@p1:=@p1+1 pos1,
@p2:=IF(car_id = 3 AND @p2 IS NULL, @p1, @p2)
FROM
cars c,
(SELECT @p1:=0, @p2:=NULL) t
ORDER BY
signup
) t
WHERE
pos1 BETWEEN @p2 - 1 AND @p2 + 1
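The user-variable trick above predates window functions; on engines that have them (MySQL 8+, SQLite 3.25+), LAG/LEAD yields the previous and next rows directly. A SQLite sketch with an invented cars table, looking around car_id = 3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE cars (car_id INTEGER, url TEXT, signup TEXT)")
con.executemany("INSERT INTO cars VALUES (?, ?, ?)", [
    (1, "a", "2012-01-01"),
    (3, "c", "2012-01-02"),
    (5, "e", "2012-01-03"),
])

# Position plus neighbours in signup order, then filter to the row of interest
row = con.execute("""
    SELECT pos, previd, prevurl, nextid, nexturl FROM (
        SELECT car_id,
               ROW_NUMBER() OVER (ORDER BY signup) AS pos,
               LAG(car_id)  OVER (ORDER BY signup) AS previd,
               LAG(url)     OVER (ORDER BY signup) AS prevurl,
               LEAD(car_id) OVER (ORDER BY signup) AS nextid,
               LEAD(url)    OVER (ORDER BY signup) AS nexturl
        FROM cars
    ) WHERE car_id = 3
""").fetchone()
```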
qid & accept id:
(9056169, 9056277)
query:
Ranges on multiple columns
soup:
If you're looking for the first range that contains at least a part of the block, try a condition like:
\nvala <= colb and cola <= valb\n
\nThis says the search range [vala,valb] must partially overlap with the target range [cola,colb].
\nIn SQL:
\nselect *\nfrom example\nwhere vala <= colb and cola <= valb\norder by\n cola -- Lowest network range\nlimit 1\n
\n
soup wrap:
If you're looking for the first range that contains at least a part of the block, try a condition like:
vala <= colb and cola <= valb
This says the search range [vala,valb] must partially overlap with the target range [cola,colb].
In SQL:
select *
from example
where vala <= colb and cola <= valb
order by
cola -- Lowest network range
limit 1
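The overlap condition is worth internalising: two ranges overlap exactly when each one starts before the other ends. In plain Python, with invented ranges:

```python
def overlaps(search, target):
    """[vala, valb] and [cola, colb] share at least one point."""
    (vala, valb), (cola, colb) = search, target
    return vala <= colb and cola <= valb

ranges = [(0, 10), (20, 30), (8, 25)]
hits = [r for r in ranges if overlaps((5, 9), r)]
```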
qid & accept id:
(9127317, 9127415)
query:
How to order by a column (which match a criteria) in SQL?
soup:
Use isnull, if UPDATE_DATE is null it uses CREATION_DATE to order rows.
\nselect * \nfrom table\norder by isnull(UPDATE_DATE, CREATION_DATE) asc\n
\nRead more about isnull on MSDN.
\ncoalesce is an alternative and it's going to work in most RDBMS (afaik).
\nselect * \nfrom table\norder by coalesce(UPDATE_DATE, CREATION_DATE) asc\n
\n
soup wrap:
Use isnull: if UPDATE_DATE is null, it falls back to CREATION_DATE to order the rows.
select *
from table
order by isnull(UPDATE_DATE, CREATION_DATE) asc
Read more about isnull on MSDN.
coalesce is an alternative and it's going to work in most RDBMS (afaik).
select *
from table
order by coalesce(UPDATE_DATE, CREATION_DATE) asc
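A quick check of the coalesce variant against SQLite (isnull is SQL Server-specific; SQLite's equivalent two-argument form is ifnull). Rows are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, UPDATE_DATE TEXT, CREATION_DATE TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    (1, None,         "2012-01-05"),
    (2, "2012-01-01", "2011-12-01"),
    (3, None,         "2012-01-03"),
])

# Rows with no UPDATE_DATE sort by CREATION_DATE instead
order = [r[0] for r in con.execute(
    "SELECT id FROM t ORDER BY COALESCE(UPDATE_DATE, CREATION_DATE) ASC")]
```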
qid & accept id:
(9153901, 9154036)
query:
Select max value within other select statement and display also a relevant field from the nested select
soup:
You don't even need a subquery:
\nSELECT COUNT(bc.taken) AS mn\n , b.title\nFROM books_clients AS bc\n JOIN books b \n ON b.book_id = bc.book_id\nGROUP BY b.title\nORDER BY mn DESC\nLIMIT 1\n
\nIf there are more than one results with same Max count, then you need a subquery:
\nSELECT allb.mn\n , allb.title\nFROM \n ( SELECT COUNT(bc.taken) AS mn\n FROM books_clients AS bc\n JOIN books b \n ON b.book_id = bc.book_id\n GROUP BY b.title\n ORDER BY mn DESC\n LIMIT 1\n ) AS maxb\n JOIN\n ( SELECT COUNT(bc.taken) AS mn\n , b.title\n FROM books_clients AS bc\n JOIN books b \n ON b.book_id = bc.book_id\n GROUP BY b.title\n ) AS allb\n ON allb.mn = maxb.man\n
\n
soup wrap:
You don't even need a subquery:
SELECT COUNT(bc.taken) AS mn
, b.title
FROM books_clients AS bc
JOIN books b
ON b.book_id = bc.book_id
GROUP BY b.title
ORDER BY mn DESC
LIMIT 1
If more than one title shares the same maximum count, then you need a subquery:
SELECT allb.mn
, allb.title
FROM
( SELECT COUNT(bc.taken) AS mn
FROM books_clients AS bc
JOIN books b
ON b.book_id = bc.book_id
GROUP BY b.title
ORDER BY mn DESC
LIMIT 1
) AS maxb
JOIN
( SELECT COUNT(bc.taken) AS mn
, b.title
FROM books_clients AS bc
JOIN books b
ON b.book_id = bc.book_id
GROUP BY b.title
) AS allb
ON allb.mn = maxb.mn
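A runnable sketch of the subquery-join approach against SQLite, with invented data in which two titles tie for the maximum count, so both come back:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE books (book_id INTEGER, title TEXT);
    CREATE TABLE books_clients (book_id INTEGER, taken INTEGER);
    INSERT INTO books VALUES (1, 'A'), (2, 'B'), (3, 'C');
    INSERT INTO books_clients VALUES (1, 1), (1, 1), (2, 1), (2, 1), (3, 1);
""")

# maxb finds the highest count; allb recomputes counts per title; the join
# keeps every title whose count equals the maximum
rows = con.execute("""
    SELECT allb.mn, allb.title
    FROM ( SELECT COUNT(bc.taken) AS mn
           FROM books_clients bc JOIN books b ON b.book_id = bc.book_id
           GROUP BY b.title
           ORDER BY mn DESC LIMIT 1 ) maxb
    JOIN ( SELECT COUNT(bc.taken) AS mn, b.title
           FROM books_clients bc JOIN books b ON b.book_id = bc.book_id
           GROUP BY b.title ) allb
      ON allb.mn = maxb.mn
    ORDER BY allb.title
""").fetchall()
```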
qid & accept id:
(9172621, 9173022)
query:
Enumerate in postgresql
soup:
I'm not sure what you're asking for. The "row number in points group" is a straight forward window function application but I don't know what "array of ids" means.
\nGiven date like this:
\n id | player_id | game_id | points \n----+-----------+---------+--------\n 1 | 1 | 1 | 0\n 2 | 1 | 2 | 1\n 3 | 1 | 3 | 5\n 4 | 2 | 1 | 1\n 5 | 2 | 2 | 0\n 6 | 2 | 3 | 0\n 7 | 3 | 1 | 2\n 8 | 3 | 2 | 3\n 9 | 3 | 3 | 1\n
\nYou can get the per-game rankings with this:
\nselect game_id, player_id, points,\n rank() over (partition by game_id order by points desc)\nfrom players\n
\nThat will give you output like this:
\n game_id | player_id | points | rank \n---------+-----------+--------+------\n 1 | 3 | 2 | 1\n 1 | 2 | 1 | 2\n 1 | 1 | 0 | 3\n 2 | 3 | 3 | 1\n 2 | 1 | 1 | 2\n 2 | 2 | 0 | 3\n 3 | 1 | 5 | 1\n 3 | 3 | 1 | 2\n 3 | 2 | 0 | 3\n
\n
soup wrap:
I'm not sure what you're asking for. The "row number in points group" is a straightforward window function application, but I don't know what "array of ids" means.
Given data like this:
id | player_id | game_id | points
----+-----------+---------+--------
1 | 1 | 1 | 0
2 | 1 | 2 | 1
3 | 1 | 3 | 5
4 | 2 | 1 | 1
5 | 2 | 2 | 0
6 | 2 | 3 | 0
7 | 3 | 1 | 2
8 | 3 | 2 | 3
9 | 3 | 3 | 1
You can get the per-game rankings with this:
select game_id, player_id, points,
rank() over (partition by game_id order by points desc)
from players
That will give you output like this:
game_id | player_id | points | rank
---------+-----------+--------+------
1 | 3 | 2 | 1
1 | 2 | 1 | 2
1 | 1 | 0 | 3
2 | 3 | 3 | 1
2 | 1 | 1 | 2
2 | 2 | 0 | 3
3 | 1 | 5 | 1
3 | 3 | 1 | 2
3 | 2 | 0 | 3
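The ranking query runs essentially unchanged on SQLite 3.25+; here it is against the sample data from the answer (rank is aliased since it is also a function name):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE players (id INTEGER, player_id INTEGER, game_id INTEGER, points INTEGER)")
con.executemany("INSERT INTO players VALUES (?, ?, ?, ?)", [
    (1, 1, 1, 0), (2, 1, 2, 1), (3, 1, 3, 5),
    (4, 2, 1, 1), (5, 2, 2, 0), (6, 2, 3, 0),
    (7, 3, 1, 2), (8, 3, 2, 3), (9, 3, 3, 1),
])

# Rank players within each game by points, highest first
rows = con.execute("""
    SELECT game_id, player_id, points,
           RANK() OVER (PARTITION BY game_id ORDER BY points DESC) AS rnk
    FROM players
    ORDER BY game_id, rnk
""").fetchall()
```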
qid & accept id:
(9197597, 9198019)
query:
How to determine first instance of multiple items in a table
soup:
As far as I know, MySQL can only do this using a correlated sub-query, or joining on a sub-query...
\n
\nCorrelated-Sub-Query:
\nSELECT\n count(browser), browser\nFROM\n access\nWHERE\n date = (SELECT MIN(date) FROM access AS lookup WHERE ip = access.ip)\n AND date > '2011-11-1'\n AND date < '2011-12-1' \nGROUP BY\n browser\n
\n
\nSub-Query:
\nSELECT\n count(access.browser), access.browser\nFROM\n (SELECT ip, MIN(date) AS date FROM access GROUP BY ip) AS lookup\nINNER JOIN\n access\n ON access.ip = lookup.ip\n AND access.date = lookup.date\nWHERE\n lookup.date > '2011-11-1'\n AND lookup.date < '2011-12-1' \nGROUP BY\n access.browser\n
\nEither way, be sue to have an index on (ip, date)
\n
soup wrap:
As far as I know, MySQL can only do this using a correlated sub-query, or joining on a sub-query...
Correlated-Sub-Query:
SELECT
count(browser), browser
FROM
access
WHERE
date = (SELECT MIN(date) FROM access AS lookup WHERE ip = access.ip)
AND date > '2011-11-1'
AND date < '2011-12-1'
GROUP BY
browser
Sub-Query:
SELECT
count(access.browser), access.browser
FROM
(SELECT ip, MIN(date) AS date FROM access GROUP BY ip) AS lookup
INNER JOIN
access
ON access.ip = lookup.ip
AND access.date = lookup.date
WHERE
lookup.date > '2011-11-1'
AND lookup.date < '2011-12-1'
GROUP BY
access.browser
Either way, be sure to have an index on (ip, date).
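A runnable check of the correlated sub-query against SQLite, with invented rows (dates zero-padded so string comparison behaves): only visits whose date equals the visitor's first-ever date, and which fall inside the November window, are counted:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE access (ip TEXT, browser TEXT, date TEXT)")
con.executemany("INSERT INTO access VALUES (?, ?, ?)", [
    ("10.0.0.1", "firefox", "2011-11-05"),   # first visit, inside window
    ("10.0.0.1", "chrome",  "2011-11-20"),   # repeat visit, excluded
    ("10.0.0.2", "chrome",  "2011-10-02"),   # first visit, before window
])

rows = con.execute("""
    SELECT COUNT(browser), browser
    FROM access
    WHERE date = (SELECT MIN(date) FROM access AS lookup
                  WHERE lookup.ip = access.ip)
      AND date > '2011-11-01' AND date < '2011-12-01'
    GROUP BY browser
""").fetchall()
```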
qid & accept id:
(9206962, 9207002)
query:
Oracle SQL - Using joins to find values in one table, and not another
soup:
SubSELECTs are fine when used appropriately... "someone does not like something" alone is not a good enough reason IMHO.
\nThere are several options - just 2 as examples:
\nSELECT nums.number FROM nums \nLEFT OUTER JOIN even ON even.number = nums.number \nWHERE even.number IS NULL\n
\nOR
\nSELECT nums.number FROM nums\nMINUS\nSELECT even.number FROM even\n
\n
soup wrap:
SubSELECTs are fine when used appropriately... "someone does not like something" alone is not a good enough reason IMHO.
There are several options - just 2 as examples:
SELECT nums.number FROM nums
LEFT OUTER JOIN even ON even.number = nums.number
WHERE even.number IS NULL
OR
SELECT nums.number FROM nums
MINUS
SELECT even.number FROM even
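Both options translate directly to SQLite, which (like PostgreSQL and SQL Server) spells Oracle's MINUS as EXCEPT. Invented rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE nums (number INTEGER);
    CREATE TABLE even (number INTEGER);
    INSERT INTO nums VALUES (1), (2), (3), (4);
    INSERT INTO even VALUES (2), (4);
""")

# Anti-join: keep nums rows with no matching even row
anti_join = [r[0] for r in con.execute("""
    SELECT nums.number FROM nums
    LEFT OUTER JOIN even ON even.number = nums.number
    WHERE even.number IS NULL
    ORDER BY nums.number""")]

# Set difference: MINUS in Oracle, EXCEPT elsewhere
except_ = [r[0] for r in con.execute("""
    SELECT number FROM nums
    EXCEPT
    SELECT number FROM even
    ORDER BY number""")]
```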
qid & accept id:
(9218949, 9219440)
query:
Query places that have common tags in database
soup:
You can use this query to produce results below:
\nselect p1.name, p2.name, t.name\nfrom places p1\njoin placestags pt1 on p1.id=pt1.placeid\njoin placestags pt2 on pt1.tagid=pt2.tagid and pt2.placeid <> p1.id\njoin places p2 on pt2.placeid=p2.id\njoin tags t on t.id=pt1.tagid\norder by p1.id, t.id\n
\nThis does not get everything into a single row like you wanted (you'd need a pivot for that, and I don't think sqlite has it), but it lets you see what is going on. Here is what you'd get from this query:
\nPlace1 | Place2 | Shared_Tag\n------------|----------------|-----------\nMcDonalds Burger King Burgers\nMcDonalds Burger King Fries\nBurger King McDonalds Burgers\nBurger King McDonalds Fries\n
\nEDIT (in response to a comment):
\nIf you are looking to shorten the query time, try reducing the number of joins, and remove the symmetric duplicates, like this:
\nselect pt1.placeid, pt2.placeid, pt1.tagid\nfrom placestags pt1\njoin placestags pt2 on pt1.tagid=pt2.tagid and pt2.placeid > pt1.placeid\norder by pt1.placeid, pt1.tagid\n
\n
soup wrap:
You can use this query to produce results below:
select p1.name, p2.name, t.name
from places p1
join placestags pt1 on p1.id=pt1.placeid
join placestags pt2 on pt1.tagid=pt2.tagid and pt2.placeid <> p1.id
join places p2 on pt2.placeid=p2.id
join tags t on t.id=pt1.tagid
order by p1.id, t.id
This does not get everything into a single row like you wanted (you'd need a pivot for that, and I don't think sqlite has it), but it lets you see what is going on. Here is what you'd get from this query:
Place1 | Place2 | Shared_Tag
------------|----------------|-----------
McDonalds Burger King Burgers
McDonalds Burger King Fries
Burger King McDonalds Burgers
Burger King McDonalds Fries
EDIT (in response to a comment):
If you are looking to shorten the query time, try reducing the number of joins, and remove the symmetric duplicates, like this:
select pt1.placeid, pt2.placeid, pt1.tagid
from placestags pt1
join placestags pt2 on pt1.tagid=pt2.tagid and pt2.placeid > pt1.placeid
order by pt1.placeid, pt1.tagid
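The reduced self-join runs as-is on SQLite; a sketch with invented link rows in which places 1 and 2 share tags 10 and 11:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE placestags (placeid INTEGER, tagid INTEGER);
    INSERT INTO placestags VALUES (1, 10), (1, 11), (2, 10), (2, 11), (3, 12);
""")

# pt2.placeid > pt1.placeid keeps each shared tag once per unordered pair
pairs = con.execute("""
    SELECT pt1.placeid, pt2.placeid, pt1.tagid
    FROM placestags pt1
    JOIN placestags pt2 ON pt1.tagid = pt2.tagid AND pt2.placeid > pt1.placeid
    ORDER BY pt1.placeid, pt1.tagid
""").fetchall()
```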
qid & accept id:
(9237650, 9237695)
query:
Sort Days of the Week in SQL
soup:
If you are stuck with the data as is, I would recommend that you add an ORDER BY clause. Within the ORDER BY clause you will want to map each distinct value to a numeric value.
\ne.g., Using IIf
\nSELECT Slot.Day\nFROM Slot\nGROUP BY Slot.Day\nORDER BY IIf(Slot.Day = "Monday", 1,\n IIf(Slot.Day = "Tuesday", 2,\n IIf(Slot.Day = "Wednesday", 3,\n IIf(Slot.Day = "Thursday", 4,\n IIf(Slot.Day = "Friday", 5)))));\n
\ne.g., Using SWITCH
\nSELECT Slot.Day\nFROM Slot\nGROUP BY Slot.Day\nORDER BY SWITCH(Slot.Day = 'Monday', 1,\n Slot.Day = 'Tuesday', 2,\n Slot.Day = 'Wednesday', 3,\n Slot.Day = 'Thursday', 4,\n Slot.Day = 'Friday', 5);\n
\n
soup wrap:
If you are stuck with the data as is, I would recommend that you add an ORDER BY clause. Within the ORDER BY clause you will want to map each distinct value to a numeric value.
e.g., Using IIf
SELECT Slot.Day
FROM Slot
GROUP BY Slot.Day
ORDER BY IIf(Slot.Day = "Monday", 1,
IIf(Slot.Day = "Tuesday", 2,
IIf(Slot.Day = "Wednesday", 3,
IIf(Slot.Day = "Thursday", 4,
IIf(Slot.Day = "Friday", 5)))));
e.g., Using SWITCH
SELECT Slot.Day
FROM Slot
GROUP BY Slot.Day
ORDER BY SWITCH(Slot.Day = 'Monday', 1,
Slot.Day = 'Tuesday', 2,
Slot.Day = 'Wednesday', 3,
Slot.Day = 'Thursday', 4,
Slot.Day = 'Friday', 5);
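Outside Access, the IIf/Switch ladder is a CASE expression; a SQLite sketch of the same ordering with invented rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Slot (Day TEXT)")
con.executemany("INSERT INTO Slot VALUES (?)",
                [("Wednesday",), ("Monday",), ("Friday",), ("Tuesday",)])

# Map each day name to its weekday number and sort by that
days = [r[0] for r in con.execute("""
    SELECT Day FROM Slot
    GROUP BY Day
    ORDER BY CASE Day
        WHEN 'Monday' THEN 1 WHEN 'Tuesday' THEN 2 WHEN 'Wednesday' THEN 3
        WHEN 'Thursday' THEN 4 WHEN 'Friday' THEN 5 END
""")]
```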
qid & accept id:
(9288893, 9289059)
query:
SQL: ORDER BY based on two columns of interlaced values
soup:
You can't. The order is not well defined
\nThe simple set
\n5 10\n7 null\nnull 8\n
\ncan be sorted
\nnull 8\n5 10\n7 null\n
\nand
\n5 10\n7 null\nnull 8\n
\ndepending on where you start sorting.
\nIf possible I would change the sort criteria to "X if available, otherwise Y". Then you could use the COALSECE operator as suggested by "mu is too short". (order by coalesce(x, y))
\n
soup wrap:
You can't. The order is not well defined.
The simple set
5 10
7 null
null 8
can be sorted
null 8
5 10
7 null
and
5 10
7 null
null 8
depending on where you start sorting.
If possible I would change the sort criteria to "X if available, otherwise Y". Then you could use COALESCE as suggested by "mu is too short" (order by coalesce(x, y)).
qid & accept id:
(9301321, 9301358)
query:
sql to find certain ids and fillins
soup:
You can use a UNION to join the records WHERE id IN (1, 2) and then the second query is your random record.
\nSELECT *\nFROM table\nWHERE id IN (1, 2)\n\nUNION\n\nSELECT Top 1 *\nFROM table\n
\nIf you provide more details about your query, then I can provide a more detailed answer.
\nEdit:\nBased on your comment you should be able to do something this like:
\nSELECT * \nFROM list_cards \nWHERE card_id IN (1, 2) AND qty > 0\n\nUNION\n\nSELECT * \nFROM list_cards \nWHERE qty > 0\n
\nIf you want to be sure you always get 3 results:
\nSELECT TOP 3 C.*\nFROM\n(\n SELECT C.*, '1' as Priority\n FROM list_cards C\n WHERE C.card_id IN (1, 2) AND qty > 0\n\n UNION\n\n SELECT C.*, '2' as Priority\n FROM list_cards C\n WHERE qty > 0\n) C\nORDER BY C.Priority\n
\n
soup wrap:
You can use a UNION to combine the records WHERE id IN (1, 2) with a second query that returns your random record.
SELECT *
FROM table
WHERE id IN (1, 2)
UNION
SELECT Top 1 *
FROM table
If you provide more details about your query, then I can provide a more detailed answer.
Edit:
Based on your comment you should be able to do something like this:
SELECT *
FROM list_cards
WHERE card_id IN (1, 2) AND qty > 0
UNION
SELECT *
FROM list_cards
WHERE qty > 0
If you want to be sure you always get 3 results:
SELECT TOP 3 C.*
FROM
(
SELECT C.*, '1' as Priority
FROM list_cards C
WHERE C.card_id IN (1, 2) AND qty > 0
UNION
SELECT C.*, '2' as Priority
FROM list_cards C
WHERE qty > 0
) C
ORDER BY C.Priority
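A SQLite adaptation of the "always 3 results" query, as a sketch with invented rows. Two changes: LIMIT replaces TOP, and the second branch excludes the IN-list ids so a guaranteed card is not returned a second time under the lower priority (UNION alone would not dedupe those rows, because their Priority values differ):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE list_cards (card_id INTEGER, qty INTEGER)")
con.executemany("INSERT INTO list_cards VALUES (?, ?)",
                [(1, 2), (2, 1), (5, 3), (6, 0), (7, 1)])

# Priority 1: the requested cards; priority 2: any other in-stock card
ids = [r[0] for r in con.execute("""
    SELECT card_id FROM (
        SELECT card_id, 1 AS Priority FROM list_cards
        WHERE card_id IN (1, 2) AND qty > 0
        UNION
        SELECT card_id, 2 FROM list_cards
        WHERE qty > 0 AND card_id NOT IN (1, 2)
    )
    ORDER BY Priority, card_id
    LIMIT 3
""")]
```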
qid & accept id:
(9355066, 9355094)
query:
A MySQL query addressing three tables: How many from A are not in B or C?
soup:
If you want no ads in either table, then the sort of query you are after is:
\nSELECT id\nFROM members\nWHERE id NOT IN ( any id from any other table )\n
\nTo select ids from other tables:
\nSELECT id\nFROM \n
\nHence:
\nSELECT id\nFROM members\nWHERE id NOT IN (SELECT id FROM dog_shareoffered)\n AND id NOT IN (SELECT id FROM dog_sharewanted)\n
\nI added the 'SELECT DISTINCT' because one member may put in many ads, but there's only one id. I used to have a SELECT DISTINCT in the subqueries above but as comments below mention, this is not necessary.
\nIf you wanted to avoid a sub-query (a possible performance increase, depending..) you could use some LEFT JOINs:
\nSELECT members.id\nFROM members\nLEFT JOIN dog_shareoffered\n ON dog_shareoffered.id = members.id\nLEFT JOIN dog_sharewanted\n ON dog_sharewanted.id = members.id\nWHERE dog_shareoffered.id IS NULL\n AND dog_sharewanted.id IS NULL\n
\nWhy this works:
\nIt takes the table members and joins it to the other two tables on the id column.\nThe LEFT JOIN means that if a member exists in the members table but not the table we're joining to (e.g. dog_shareoffered), then the corresponding dog_shareoffered columns will have NULL in them.
\nSo, the WHERE condition picks out rows where there's a NULL id in both dog_shareoffered and dog_sharewanted, meaning we've found ids in members with no corresponding id in the other two tables.
\n
soup wrap:
If you want no ads in either table, then the sort of query you are after is:
SELECT id
FROM members
WHERE id NOT IN ( any id from any other table )
To select ids from other tables:
SELECT id
FROM
Hence:
SELECT id
FROM members
WHERE id NOT IN (SELECT id FROM dog_shareoffered)
AND id NOT IN (SELECT id FROM dog_sharewanted)
One member may place many ads, but NOT IN only checks whether an id appears at all, so a SELECT DISTINCT in the subqueries is unnecessary. (I originally had one there, but as the comments below mention, it adds nothing.)
If you wanted to avoid a sub-query (a possible performance increase, depending..) you could use some LEFT JOINs:
SELECT members.id
FROM members
LEFT JOIN dog_shareoffered
ON dog_shareoffered.id = members.id
LEFT JOIN dog_sharewanted
ON dog_sharewanted.id = members.id
WHERE dog_shareoffered.id IS NULL
AND dog_sharewanted.id IS NULL
Why this works:
It takes the table members and joins it to the other two tables on the id column.
The LEFT JOIN means that if a member exists in the members table but not the table we're joining to (e.g. dog_shareoffered), then the corresponding dog_shareoffered columns will have NULL in them.
So, the WHERE condition picks out rows where there's a NULL id in both dog_shareoffered and dog_sharewanted, meaning we've found ids in members with no corresponding id in the other two tables.
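The double NOT IN can be verified quickly in SQLite (table contents invented; note that NOT IN misbehaves if a subquery returns NULLs, so make sure the id columns are NOT NULL in practice):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE members (id INTEGER);
    CREATE TABLE dog_shareoffered (id INTEGER);
    CREATE TABLE dog_sharewanted (id INTEGER);
    INSERT INTO members VALUES (1), (2), (3), (4);
    INSERT INTO dog_shareoffered VALUES (2);
    INSERT INTO dog_sharewanted VALUES (3), (2);
""")

# Members with no ad in either table
no_ads = [r[0] for r in con.execute("""
    SELECT id FROM members
    WHERE id NOT IN (SELECT id FROM dog_shareoffered)
      AND id NOT IN (SELECT id FROM dog_sharewanted)
    ORDER BY id""")]
```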
qid & accept id:
(9356686, 9356787)
query:
mysql query for related articles
soup:
Try
\nselect a.* from Article a\ninner join ArticleTag at\n on at.idArticle = a.idArticle\nwhere at.idTag in (select idTag from ArticleTag where idArticle =5)\n
\nor
\nselect a.* from Article a\ninner join ArticleTag at on at.idArticle= a.idArticle\ninner join ArticleTag at2 on at2.idTag = a.idTag and at2.IdArticle! = at.idArticle\nwhere at2.idArticle = 5\n
\n
soup wrap:
Try
select a.* from Article a
inner join ArticleTag at
on at.idArticle = a.idArticle
where at.idTag in (select idTag from ArticleTag where idArticle =5)
or
select a.* from Article a
inner join ArticleTag at on at.idArticle= a.idArticle
inner join ArticleTag at2 on at2.idTag = at.idTag and at2.idArticle != at.idArticle
where at2.idArticle = 5
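A SQLite sketch of the same idea with invented articles and tags; the seed article is excluded and duplicates collapsed, since the raw join would also return article 5 itself and one row per shared tag:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Article (idArticle INTEGER, title TEXT);
    CREATE TABLE ArticleTag (idArticle INTEGER, idTag INTEGER);
    INSERT INTO Article VALUES (5, 'seed'), (6, 'related'), (7, 'unrelated');
    INSERT INTO ArticleTag VALUES (5, 1), (6, 1), (7, 2);
""")

# Articles sharing at least one tag with article 5
related = [r[0] for r in con.execute("""
    SELECT DISTINCT a.idArticle
    FROM Article a
    JOIN ArticleTag at2 ON at2.idArticle = a.idArticle
    WHERE at2.idTag IN (SELECT idTag FROM ArticleTag WHERE idArticle = 5)
      AND a.idArticle <> 5
    ORDER BY a.idArticle
""")]
```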
qid & accept id:
(9394879, 9395523)
query:
Sum totals for columns
soup:
This is going to look complicated, but bear with me. It needs some clarification on what is meant by others/rate however the principle is sound. If you have a primary key on financies that you can use then a more elegant (GROUP BY ... ROLLUP) solution may be viable however I've not sufficient experience with that to offer reliable advice. Here goes how I would address the issue.
\nLong-winded option
\n(\n SELECT\n financesTallied.date,\n financesTallied.rate,\n financesTallied.supply_fee,\n financesTallied.demand_fee,\n financesTallied.charged_fee,\n financesTallied.total_costs,\n financesTallied.net_return\n\n FROM (\n\n SELECT\n financeWithNetReturn.*,\n @supplyFee := @supplyFee + financeWithNetReturn.supply_fee,\n @demandFee := @demandFee + financeWithNetReturn.demand_fee,\n @charedFee := @charedFee + financeWithNetReturn.charged_fee\n FROM \n ( // Calculate net return based off total costs\n SELECT \n financeData.*,\n financeData.supply_fee - financeData.total_costs AS net_return\n FROM \n ( // Select the data\n SELECT\n date, \n rate, \n supply_fee, \n demand_fee, \n charged_fee,\n (supply_fee+demand_fee+charged_fee)/rate AS total_costs // need clarification on others/rate\n FROM financies\n WHERE date BETWEEN '2010-01-10' AND '2011-01-01'\n ORDER BY date ASC\n ) AS financeData\n ) AS financeWithNetReturn,\n (\n SELECT\n @supplyFee := 0\n @demandFee := 0\n @charedFee := 0\n ) AS variableInit\n ) AS financesTallied\n) UNION (\n SELECT\n '*Total*',\n NULL,\n @supplyFee,\n @demandFee,\n @chargedFee,\n NULL,\n NULL\n)\n
\nWorking from the innermost query to the outermost. This query selects the basic fees and calculates the total_costs for this row. This total_costs formula will need adjustment as I'm not 100% clear on what you were looking for there. Will refer to this as [SQ1]
\n SELECT\n date, \n rate, \n supply_fee, \n demand_fee, \n charged_fee,\n (supply_fee+demand_fee+charged_fee)/rate AS total_costs // need clarification on others/rate\n FROM financies\n WHERE date BETWEEN '2010-01-10' AND '2011-01-01'\n ORDER BY date ASC\n
\nNext level up I'm just reusing the calculated total_costs column with the supply_fee column to add in a net_return column. This concludes the basic data you need per-row, will refer to this as [SQL2]
\n SELECT \n financeData.*,\n financeData.supply_fee - financeData.total_costs AS net_return\n FROM \n ([SQ1]) AS financeData\n
\nAt this level it's time to start tallying up the values, so need to initialise the variables required with 0 values ([SQL3])
\n SELECT\n @supplyFee := 0\n @demandFee := 0\n @charedFee := 0 \n
\nNext level up, I'm using the calculated rows to calculate the totals ([SQL4])
\n SELECT\n financeWithNetReturn.*,\n @supplyFee := @supplyFee + financeWithNetReturn.supply_fee,\n @demandFee := @demandFee + financeWithNetReturn.demand_fee,\n @charedFee := @charedFee + financeWithNetReturn.charged_fee\n FROM \n ([SQL2]) AS financeWithNetReturn,\n ([SQL3]) AS variableInit\n
\nNow finally at the top level, just need to output the desired columns without the calculated columns ([SQL5])
\nSELECT\n financesTallied.date,\n financesTallied.rate,\n financesTallied.supply_fee,\n financesTallied.demand_fee,\n financesTallied.charged_fee,\n financesTallied.total_costs,\n financesTallied.net_return\n\nFROM ([SQL4]) AS financesTallied\n
\nAnd then output it UNIONED with a totals row
\n([SQL5]) UNION (\n SELECT\n '*Total*',\n NULL,\n @supplyFee,\n @demandFee,\n @chargedFee,\n NULL,\n NULL\n)\n
\n
soup wrap:
This is going to look complicated, but bear with me. It needs some clarification on what is meant by others/rate, but the principle is sound. If you have a primary key on financies that you can use, then a more elegant (GROUP BY ... ROLLUP) solution may be viable, but I've not sufficient experience with that to offer reliable advice. Here is how I would address the issue.
Long-winded option
(
SELECT
financesTallied.date,
financesTallied.rate,
financesTallied.supply_fee,
financesTallied.demand_fee,
financesTallied.charged_fee,
financesTallied.total_costs,
financesTallied.net_return
FROM (
SELECT
financeWithNetReturn.*,
@supplyFee := @supplyFee + financeWithNetReturn.supply_fee,
@demandFee := @demandFee + financeWithNetReturn.demand_fee,
@chargedFee := @chargedFee + financeWithNetReturn.charged_fee
FROM
( -- Calculate net return based off total costs
SELECT
financeData.*,
financeData.supply_fee - financeData.total_costs AS net_return
FROM
( -- Select the data
SELECT
date,
rate,
supply_fee,
demand_fee,
charged_fee,
(supply_fee+demand_fee+charged_fee)/rate AS total_costs -- need clarification on others/rate
FROM financies
WHERE date BETWEEN '2010-01-10' AND '2011-01-01'
ORDER BY date ASC
) AS financeData
) AS financeWithNetReturn,
(
SELECT
@supplyFee := 0,
@demandFee := 0,
@chargedFee := 0
) AS variableInit
) AS financesTallied
) UNION (
SELECT
'*Total*',
NULL,
@supplyFee,
@demandFee,
@chargedFee,
NULL,
NULL
)
Working from the innermost query to the outermost. This query selects the basic fees and calculates the total_costs for this row. This total_costs formula will need adjustment as I'm not 100% clear on what you were looking for there. Will refer to this as [SQ1]
SELECT
date,
rate,
supply_fee,
demand_fee,
charged_fee,
(supply_fee+demand_fee+charged_fee)/rate AS total_costs -- need clarification on others/rate
FROM financies
WHERE date BETWEEN '2010-01-10' AND '2011-01-01'
ORDER BY date ASC
Next level up I'm just reusing the calculated total_costs column with the supply_fee column to add in a net_return column. This concludes the basic data you need per-row, will refer to this as [SQL2]
SELECT
financeData.*,
financeData.supply_fee - financeData.total_costs AS net_return
FROM
([SQ1]) AS financeData
At this level it's time to start tallying up the values, so need to initialise the variables required with 0 values ([SQL3])
SELECT
@supplyFee := 0,
@demandFee := 0,
@chargedFee := 0
Next level up, I'm using the calculated rows to calculate the totals ([SQL4])
SELECT
financeWithNetReturn.*,
@supplyFee := @supplyFee + financeWithNetReturn.supply_fee,
@demandFee := @demandFee + financeWithNetReturn.demand_fee,
@chargedFee := @chargedFee + financeWithNetReturn.charged_fee
FROM
([SQL2]) AS financeWithNetReturn,
([SQL3]) AS variableInit
Now finally at the top level, just need to output the desired columns without the calculated columns ([SQL5])
SELECT
financesTallied.date,
financesTallied.rate,
financesTallied.supply_fee,
financesTallied.demand_fee,
financesTallied.charged_fee,
financesTallied.total_costs,
financesTallied.net_return
FROM ([SQL4]) AS financesTallied
And then output it UNIONED with a totals row
([SQL5]) UNION (
SELECT
'*Total*',
NULL,
@supplyFee,
@demandFee,
@chargedFee,
NULL,
NULL
)
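On MySQL 8+ (or SQLite 3.25+) the user-variable tallies above collapse into window aggregates. A minimal sketch of one running total over an invented financies table; the other fee columns would get the same treatment:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE financies (date TEXT, supply_fee REAL)")
con.executemany("INSERT INTO financies VALUES (?, ?)",
                [("2010-02-01", 100.0), ("2010-03-01", 50.0), ("2010-04-01", 25.0)])

# SUM() OVER (ORDER BY ...) gives a running total per row; the final row
# carries the grand total, replacing the @variable bookkeeping
rows = con.execute("""
    SELECT date, supply_fee,
           SUM(supply_fee) OVER (ORDER BY date) AS running_supply_fee
    FROM financies
    ORDER BY date
""").fetchall()
```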
qid & accept id:
(9403894, 9403921)
query:
Sort by a particular value
soup:
Use the FIND_IN_SET function:
\nhttp://dev.mysql.com/doc/refman/5.1/en/string-functions.html#function_find-in-set
\nCode will look like this:
\nORDER BY FIND_IN_SET(pa.status, 'pending,failed,application,submitted,canceled')\n
\n
\nHere is how I would rewrite your SQL query:
\nSELECT\n cl.id, cl.lead_id, cl.client_name, \n po.id, po.carrier,\n pa.downpayment_time, pa.status, pa.policy_id\nFROM\n pdp_client_info AS cl\n JOIN pdp_policy_info AS po ON (cl.id = po.id)\n JOIN pdp_payment AS pa ON (po.id = pa.policy_id)\nWHERE\n (pa.downpayment_date = '$current_date')\n AND (pa.status IN ('pending', 'failed', 'application', 'submitted', 'canceled'))\nORDER BY\n FIND_IN_SET(pa.status, 'pending,failed,application,submitted,canceled')\n
\n
soup wrap:
Use the FIND_IN_SET function:
http://dev.mysql.com/doc/refman/5.1/en/string-functions.html#function_find-in-set
Code will look like this:
ORDER BY FIND_IN_SET(pa.status, 'pending,failed,application,submitted,canceled')
Here is how I would rewrite your SQL query:
SELECT
cl.id, cl.lead_id, cl.client_name,
po.id, po.carrier,
pa.downpayment_time, pa.status, pa.policy_id
FROM
pdp_client_info AS cl
JOIN pdp_policy_info AS po ON (cl.id = po.id)
JOIN pdp_payment AS pa ON (po.id = pa.policy_id)
WHERE
(pa.downpayment_date = '$current_date')
AND (pa.status IN ('pending', 'failed', 'application', 'submitted', 'canceled'))
ORDER BY
FIND_IN_SET(pa.status, 'pending,failed,application,submitted,canceled')
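FIND_IN_SET is MySQL-only, but the same ordering can be had anywhere with INSTR on a delimited list, since earlier list entries sit at smaller string positions. A SQLite sketch with an invented pdp_payment table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pdp_payment (policy_id INTEGER, status TEXT)")
con.executemany("INSERT INTO pdp_payment VALUES (?, ?)",
                [(1, "submitted"), (2, "pending"), (3, "failed")])

order_list = "pending,failed,application,submitted,canceled"

# Wrap both sides in commas so 'pending' can't match inside another word
statuses = [r[0] for r in con.execute("""
    SELECT status FROM pdp_payment
    ORDER BY INSTR(',' || :lst || ',', ',' || status || ',')
""", {"lst": order_list})]
```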
qid & accept id:
(9414038, 9414525)
query:
Join between two master and one detail Table
soup:
Haven't got access to Northwind right now, so this is untested, but you should get the idea...
\nThere's no need to get data about employees if all you want is sales per region. Your subquery is therefore redundant...
\nselect\n r.RegionDescription, \n sum(OD.Quantity*OD.UnitPrice)\nfrom\n Region R\n inner join Territories T\n on R.RegionID=T.RegionID\n inner join EmployeeTerritories ET\n on T.TerritoryID=ET.TerritoryID\n inner join Employees E\n on ET.EmployeeID=E.EmployeeID\n inner join Orders O\n on E.EmployeeID=o.EmployeeID\n inner join [Order Details] OD\n on o.OrderID=OD.OrderID\n Group by r.RegionDescription\n
\nAs discussed in the comments, this "double counts" sales where an employee is assigned to more than one region. In many cases, this is desired behaviour - if you want to know how well a region is doing, you need to know how many sales came from that region, and if an employee is assigned to more than one region, that doesn't affect the region's performance.
\nHowever, it means you overstate the sales if you add up all the regions.
\nThere are two strategies to avoid this. One is to assign the sale to just one region; in the comments, you say there's no data on which to make that decision, so you could do it on the "lowest regionID) - something like:
\nselect\n r.RegionDescription, \n sum(OD.Quantity*OD.UnitPrice)\nfrom\n Region R\n inner join Territories T\n on R.RegionID=T.RegionID\n inner join EmployeeTerritories ET\n on T.TerritoryID=ET.TerritoryID\n inner join (\n select EmployeeID, min(TerritoryID) as TerritoryID\n from EmployeeTerritories\n group by EmployeeID\n ) FirstTerritory\n on ET.EmployeeID=FirstTerritory.EmployeeID\n and ET.TerritoryID=FirstTerritory.TerritoryID\n inner join Employees E\n on ET.EmployeeID=E.EmployeeID\n inner join Orders O\n on E.EmployeeID=o.EmployeeID\n inner join [Order Details] OD\n on o.OrderID=OD.OrderID\n Group by r.RegionDescription\n
\n(again, no access to DB, so can't test - but this should filter out duplicates).
\nAlternatively, you can assign a proportion of the sale to each region - though then rounding may cause the totals not to add up properly. That's a query I'd like to try before posting though!
\n
soup wrap:
Haven't got access to Northwind right now, so this is untested, but you should get the idea...
There's no need to get data about employees if all you want is sales per region. Your subquery is therefore redundant...
select
r.RegionDescription,
sum(OD.Quantity*OD.UnitPrice)
from
Region R
inner join Territories T
on R.RegionID=T.RegionID
inner join EmployeeTerritories ET
on T.TerritoryID=ET.TerritoryID
inner join Employees E
on ET.EmployeeID=E.EmployeeID
inner join Orders O
on E.EmployeeID=o.EmployeeID
inner join [Order Details] OD
on o.OrderID=OD.OrderID
Group by r.RegionDescription
As discussed in the comments, this "double counts" sales where an employee is assigned to more than one region. In many cases, this is desired behaviour - if you want to know how well a region is doing, you need to know how many sales came from that region, and if an employee is assigned to more than one region, that doesn't affect the region's performance.
However, it means you overstate the sales if you add up all the regions.
There are two strategies to avoid this. One is to assign the sale to just one region; in the comments, you say there's no data on which to make that decision, so you could do it on the "lowest regionID" - something like:
select
r.RegionDescription,
sum(OD.Quantity*OD.UnitPrice)
from
Region R
inner join Territories T
on R.RegionID=T.RegionID
inner join EmployeeTerritories ET
on T.TerritoryID=ET.TerritoryID
inner join (
select EmployeeID, min(TerritoryID) as TerritoryID
from EmployeeTerritories
group by EmployeeID
) FirstTerritory
on ET.EmployeeID=FirstTerritory.EmployeeID
and ET.TerritoryID=FirstTerritory.TerritoryID
inner join Employees E
on ET.EmployeeID=E.EmployeeID
inner join Orders O
on E.EmployeeID=o.EmployeeID
inner join [Order Details] OD
on o.OrderID=OD.OrderID
Group by r.RegionDescription
(again, no access to DB, so can't test - but this should filter out duplicates).
Alternatively, you can assign a proportion of the sale to each region - though then rounding may cause the totals not to add up properly. That's a query I'd like to try before posting though!
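For what it's worth, here is an untested sketch of that proportional approach: give each of an employee's regions an equal share of that employee's sales. The derived-table aliases ER and RC are made up for the sketch, and the rounding caveat above still applies:

```sql
select
    ER.RegionDescription,
    sum(OD.Quantity * OD.UnitPrice / RC.RegionCount)
from
    -- ER: one row per (employee, region) pair, de-duplicated
    (select distinct ET.EmployeeID, R.RegionID, R.RegionDescription
     from Region R
     inner join Territories T on R.RegionID = T.RegionID
     inner join EmployeeTerritories ET on T.TerritoryID = ET.TerritoryID) ER
    -- RC: how many distinct regions each employee belongs to
    inner join
    (select ET2.EmployeeID, count(distinct T2.RegionID) as RegionCount
     from EmployeeTerritories ET2
     inner join Territories T2 on ET2.TerritoryID = T2.TerritoryID
     group by ET2.EmployeeID) RC
      on ER.EmployeeID = RC.EmployeeID
    inner join Orders O on ER.EmployeeID = O.EmployeeID
    inner join [Order Details] OD on O.OrderID = OD.OrderID
group by ER.RegionDescription
```

Because each sale appears once per distinct region and is divided by the region count, the region totals should add up to the overall total (modulo rounding).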
qid & accept id:
(9419615, 19190385)
query:
SQL query on many-to-many with redundant constraint
soup:
Well, I think I wasn't very clear in my description, but the solution I found is to proceed in steps, using PHP as well as SQL.
\nI do a first search with the first criterion:
\nwhere Topics.PK_TOPICS=8\n
\nI get the result in a PHP array. Then, a second one, with the second criterion:
\nwhere Topics.PK_TOPICS=15\n
\nI get the results in another, temporary PHP array.\nAnd then I use PHP array_intersect() :
\n$results = array_intersect($results, $temp_results);\n
\nto keep only the results that match both criteria. Obviously, I can reuse $results to intersect as many times as I want, so there is no limit on the number of search criteria.
\nHope this helps…
\n
soup wrap:
Well, I think I wasn't very clear in my description, but the solution I found is to proceed in steps, using PHP as well as SQL.
I do a first search with the first criterion:
where Topics.PK_TOPICS=8
I get the result in a PHP array. Then, a second one, with the second criterion:
where Topics.PK_TOPICS=15
I get the results in another, temporary PHP array.
And then I use PHP array_intersect():
$results = array_intersect($results, $temp_results);
to keep only the results that match both criteria. Obviously, I can reuse $results to intersect as many times as I want, so there is no limit on the number of search criteria.
Hope this helps…
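That said, this kind of "match every criterion" search (relational division) can also be done in a single SQL query, avoiding the round trips into PHP. A hedged sketch, assuming a hypothetical link table Topics_Results(PK_TOPICS, result_id) between topics and results:

```sql
SELECT result_id
FROM Topics_Results
WHERE PK_TOPICS IN (8, 15)            -- the search criteria
GROUP BY result_id
HAVING COUNT(DISTINCT PK_TOPICS) = 2  -- must match ALL listed criteria
```

The 2 must equal the number of criteria in the IN list; to add a criterion, extend both.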
qid & accept id:
(9429371, 9429424)
query:
Sql Query to count same date entries
soup:
The reason you get what you get is that you are also comparing the time, down to the second. So any entries created in the same second will be grouped together.
\nTo achieve what you actually want, you need to apply a date function to the created_at column:
\nSELECT COUNT(1) AS entries, DATE(created_at) as date\nFROM wp_frm_items\nWHERE user_id =1\nGROUP BY DATE(created_at)\nLIMIT 0 , 30\n
\nThis would remove the time part from the column field, and so group together any entries created on the same day. You could take this further by removing the day part to group entries created on the same month of the same year etc.
\nTo restrict the query to entries created in the current month, you add a WHERE-clause to the query to only select entries that satisfy that condition. Here's an example:
\nSELECT COUNT(1) AS entries, DATE(created_at) as date \nFROM wp_frm_items\nWHERE user_id = 1 \n AND created_at >= DATE_FORMAT(CURDATE(),'%Y-%m-01') \nGROUP BY DATE(created_at)\n
\nNote: The COUNT(1)-part of the query simply means Count each row, and you could just as well have written COUNT(*), COUNT(id) or any other field. Historically, the most efficient approach was to count the primary key, since that is always available in whatever index the query engine could utilize. COUNT(*) used to have to leave the index and retrieve the corresponding row in the table, which was sometimes inefficient. In more modern query planners this is probably no longer the case. COUNT(1) is another variant of this that didn't force the query planner to retrieve the rows from the table.
\nEdit: The query to group by month can be created in a number of different ways. Here is an example:
\nSELECT COUNT(1) AS entries, DATE_FORMAT(created_at,'%Y-%c') as month\nFROM wp_frm_items\nWHERE user_id =1\nGROUP BY DATE_FORMAT(created_at,'%Y-%c')\n
\n
soup wrap:
The reason you get what you get is that you are also comparing the time, down to the second. So any entries created in the same second will be grouped together.
To achieve what you actually want, you need to apply a date function to the created_at column:
SELECT COUNT(1) AS entries, DATE(created_at) as date
FROM wp_frm_items
WHERE user_id =1
GROUP BY DATE(created_at)
LIMIT 0 , 30
This would remove the time part from the column field, and so group together any entries created on the same day. You could take this further by removing the day part to group entries created on the same month of the same year etc.
To restrict the query to entries created in the current month, you add a WHERE-clause to the query to only select entries that satisfy that condition. Here's an example:
SELECT COUNT(1) AS entries, DATE(created_at) as date
FROM wp_frm_items
WHERE user_id = 1
AND created_at >= DATE_FORMAT(CURDATE(),'%Y-%m-01')
GROUP BY DATE(created_at)
Note: The COUNT(1)-part of the query simply means Count each row, and you could just as well have written COUNT(*), COUNT(id) or any other field. Historically, the most efficient approach was to count the primary key, since that is always available in whatever index the query engine could utilize. COUNT(*) used to have to leave the index and retrieve the corresponding row in the table, which was sometimes inefficient. In more modern query planners this is probably no longer the case. COUNT(1) is another variant of this that didn't force the query planner to retrieve the rows from the table.
Edit: The query to group by month can be created in a number of different ways. Here is an example:
SELECT COUNT(1) AS entries, DATE_FORMAT(created_at,'%Y-%c') as month
FROM wp_frm_items
WHERE user_id =1
GROUP BY DATE_FORMAT(created_at,'%Y-%c')
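The same pattern extends to any calendar bucket. For example, grouping by week (untested sketch, same table and columns as above):

```sql
-- YEARWEEK() combines year and week number, so weeks from
-- different years are not lumped together
SELECT COUNT(1) AS entries, YEARWEEK(created_at) AS week
FROM wp_frm_items
WHERE user_id = 1
GROUP BY YEARWEEK(created_at)
```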
qid & accept id:
(9432630, 9433395)
query:
String concatenation in SQL server
soup:
In case you need to do this as a set and not one row at a time. Given the following split function:
\nUSE tempdb;\nGO\nCREATE FUNCTION dbo.SplitStrings(@List NVARCHAR(MAX))\nRETURNS TABLE\nAS\n RETURN ( SELECT Item FROM\n ( SELECT Item = x.i.value('(./text())[1]', 'nvarchar(max)')\n FROM ( SELECT [XML] = CONVERT(XML, '<i>'\n + REPLACE(@List,',', '</i><i>') + '</i>').query('.')\n ) AS a CROSS APPLY [XML].nodes('i') AS x(i) ) AS y\n WHERE Item IS NOT NULL\n );\nGO\n
\nThen with the following table and sample data, and string variable, you can get all of the results this way:
\nDECLARE @foo TABLE(ID INT IDENTITY(1,1), col NVARCHAR(MAX));\n\nINSERT @foo(col) SELECT N'c,d,e,f,g';\nINSERT @foo(col) SELECT N'c,e,b';\nINSERT @foo(col) SELECT N'd,e,f,x,a,e';\n\nDECLARE @string NVARCHAR(MAX) = N'a,b,c,d';\n\n;WITH x AS\n(\n SELECT f.ID, c.Item FROM @foo AS f\n CROSS APPLY dbo.SplitStrings(f.col) AS c\n), y AS\n(\n SELECT ID, Item FROM x\n UNION\n SELECT x.ID, s.Item\n FROM dbo.SplitStrings(@string) AS s\n CROSS JOIN x\n)\nSELECT DISTINCT ID, Items = STUFF((SELECT ',' + Item \n FROM y AS y2 WHERE y2.ID = y.ID \n FOR XML PATH(''), TYPE).value('.[1]', 'nvarchar(max)'), 1, 1, N'')\nFROM y;\n
\nResults:
\nID Items\n-- ----------\n 1 a,b,c,d,e,f,g\n 2 a,b,c,d,e\n 3 a,b,c,d,e,f,x\n
\nNow that all said, what you really should do is follow the previous advice and store these things in a related table in the first place. You can use the same type of splitting methodology to store the strings separately whenever an insert or update happens, instead of just dumping the CSV into a single column, and your applications shouldn't really have to change the way they're passing data into your procedures. But it sure will be easier to get the data out!
\nEDIT
\nAdding a potential solution for SQL Server 2008 that is a bit more convoluted but gets things done with one less loop (using a massive table scan and replace instead). I don't think this is any better than the solution above, and it is certainly less maintainable, but it is an option to test out should you find you are able to upgrade to 2008 or better (and also for any 2008+ users who come across this question).
\nSET NOCOUNT ON;\n\n-- let's pretend this is our static table:\n\nCREATE TABLE #x\n(\n ID INT IDENTITY(1,1),\n col NVARCHAR(MAX)\n);\n\nINSERT #x(col) VALUES(N'c,d,e,f,g'), (N'c,e,b'), (N'd,e,f,x,a,e');\n\n-- and here is our parameter:\n\nDECLARE @string NVARCHAR(MAX) = N'a,b,c,d';\n
\nThe code:
\nDECLARE @sql NVARCHAR(MAX) = N'DECLARE @src TABLE(ID INT, col NVARCHAR(32));\n DECLARE @dest TABLE(ID INT, col NVARCHAR(32));';\n\nSELECT @sql += '\n INSERT @src VALUES(' + RTRIM(ID) + ','''\n + REPLACE(col, ',', '''),(' + RTRIM(ID) + ',''') + ''');'\nFROM #x;\n\nSELECT @sql += '\n INSERT @dest VALUES(' + RTRIM(ID) + ','''\n + REPLACE(@string, ',', '''),(' + RTRIM(ID) + ',''') + ''');'\nFROM #x;\n\nSELECT @sql += '\n WITH x AS (SELECT ID, col FROM @src UNION SELECT ID, col FROM @dest)\n SELECT DISTINCT ID, Items = STUFF((SELECT '','' + col\n FROM x AS x2 WHERE x2.ID = x.ID FOR XML PATH('''')), 1, 1, N'''')\n FROM x;'\n\nEXEC sp_executesql @sql;\nGO\nDROP TABLE #x;\n
\nThis is much trickier to do in 2005 (though not impossible) because you need to change the VALUES() clauses to UNION ALL...
\n
soup wrap:
In case you need to do this as a set and not one row at a time. Given the following split function:
USE tempdb;
GO
CREATE FUNCTION dbo.SplitStrings(@List NVARCHAR(MAX))
RETURNS TABLE
AS
RETURN ( SELECT Item FROM
( SELECT Item = x.i.value('(./text())[1]', 'nvarchar(max)')
FROM ( SELECT [XML] = CONVERT(XML, '<i>'
+ REPLACE(@List,',', '</i><i>') + '</i>').query('.')
) AS a CROSS APPLY [XML].nodes('i') AS x(i) ) AS y
WHERE Item IS NOT NULL
);
GO
Then with the following table and sample data, and string variable, you can get all of the results this way:
DECLARE @foo TABLE(ID INT IDENTITY(1,1), col NVARCHAR(MAX));
INSERT @foo(col) SELECT N'c,d,e,f,g';
INSERT @foo(col) SELECT N'c,e,b';
INSERT @foo(col) SELECT N'd,e,f,x,a,e';
DECLARE @string NVARCHAR(MAX) = N'a,b,c,d';
;WITH x AS
(
SELECT f.ID, c.Item FROM @foo AS f
CROSS APPLY dbo.SplitStrings(f.col) AS c
), y AS
(
SELECT ID, Item FROM x
UNION
SELECT x.ID, s.Item
FROM dbo.SplitStrings(@string) AS s
CROSS JOIN x
)
SELECT DISTINCT ID, Items = STUFF((SELECT ',' + Item
FROM y AS y2 WHERE y2.ID = y.ID
FOR XML PATH(''), TYPE).value('.[1]', 'nvarchar(max)'), 1, 1, N'')
FROM y;
Results:
ID Items
-- ----------
1 a,b,c,d,e,f,g
2 a,b,c,d,e
3 a,b,c,d,e,f,x
Now, all that said, what you really should do is follow the previous advice and store these things in a related table in the first place. You can use the same type of splitting methodology to store the strings separately whenever an insert or update happens, instead of just dumping the CSV into a single column, and your applications shouldn't really have to change the way they're passing data into your procedures. But it sure will be easier to get the data out!
EDIT
Adding a potential solution for SQL Server 2008 that is a bit more convoluted but gets things done with one less loop (using a massive table scan and replace instead). I don't think this is any better than the solution above, and it is certainly less maintainable, but it is an option to test out should you find you are able to upgrade to 2008 or better (and also for any 2008+ users who come across this question).
SET NOCOUNT ON;
-- let's pretend this is our static table:
CREATE TABLE #x
(
ID INT IDENTITY(1,1),
col NVARCHAR(MAX)
);
INSERT #x(col) VALUES(N'c,d,e,f,g'), (N'c,e,b'), (N'd,e,f,x,a,e');
-- and here is our parameter:
DECLARE @string NVARCHAR(MAX) = N'a,b,c,d';
The code:
DECLARE @sql NVARCHAR(MAX) = N'DECLARE @src TABLE(ID INT, col NVARCHAR(32));
DECLARE @dest TABLE(ID INT, col NVARCHAR(32));';
SELECT @sql += '
INSERT @src VALUES(' + RTRIM(ID) + ','''
+ REPLACE(col, ',', '''),(' + RTRIM(ID) + ',''') + ''');'
FROM #x;
SELECT @sql += '
INSERT @dest VALUES(' + RTRIM(ID) + ','''
+ REPLACE(@string, ',', '''),(' + RTRIM(ID) + ',''') + ''');'
FROM #x;
SELECT @sql += '
WITH x AS (SELECT ID, col FROM @src UNION SELECT ID, col FROM @dest)
SELECT DISTINCT ID, Items = STUFF((SELECT '','' + col
FROM x AS x2 WHERE x2.ID = x.ID FOR XML PATH('''')), 1, 1, N'''')
FROM x;'
EXEC sp_executesql @sql;
GO
DROP TABLE #x;
This is much trickier to do in 2005 (though not impossible) because you need to change the VALUES() clauses to UNION ALL...
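(For readers on SQL Server 2017 or later: the built-in STRING_SPLIT and STRING_AGG functions make both solutions above largely unnecessary. An untested sketch, reusing the @foo table variable and @string parameter from the first example:)

```sql
SELECT f.ID,
       Items = STRING_AGG(u.Item, ',')   -- re-assemble the merged set
FROM @foo AS f
CROSS APPLY
(
    SELECT value AS Item FROM STRING_SPLIT(f.col, ',')
    UNION                                -- UNION (not ALL) removes duplicates
    SELECT value FROM STRING_SPLIT(@string, ',')
) AS u
GROUP BY f.ID;
```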
qid & accept id:
(9459554, 9460084)
query:
Get the max value of a column from set of rows
soup:
I think this is the query you're looking for:
\nselect b.*, c.filenumber from b\njoin (\n select id, max(count) as count from a\n group by id\n) as NewA on b.id = NewA.id\njoin c on NewA.count = c.count\n
\nHowever, you should take into account that I don't get why for id=1 in tableA you choose the 16 to match against table C (which is the max) and for id=2 in tableA you choose the 10 to match against table C (which is the min). I assumed you meant the max in both cases.
\nEdit:
\nI see you've updated tableA data. The query results in this, given the previous data:
\n+----+---------------+------------+\n| ID | FILENAME | FILENUMBER |\n+----+---------------+------------+\n| 1 | sample1.file | 1234 |\n| 2 | sample2.file | 3456 |\n| 3 | sample3.file | 4567 |\n+----+---------------+------------+\n
\nHere is a working example
\n
soup wrap:
I think this is the query you're looking for:
select b.*, c.filenumber from b
join (
select id, max(count) as count from a
group by id
) as NewA on b.id = NewA.id
join c on NewA.count = c.count
However, note that I don't get why for id=1 in tableA you chose 16 to match against table C (which is the max), while for id=2 you chose 10 (which is the min). I assumed you meant the max in both cases.
Edit:
I see you've updated tableA data. The query results in this, given the previous data:
+----+---------------+------------+
| ID | FILENAME | FILENUMBER |
+----+---------------+------------+
| 1 | sample1.file | 1234 |
| 2 | sample2.file | 3456 |
| 3 | sample3.file | 4567 |
+----+---------------+------------+
Here is a working example
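If your database has window functions (MySQL 8.0+, for example), the max-count-per-id step can also be written with ROW_NUMBER(), which makes tie-handling explicit - an untested variant of the same query:

```sql
select b.*, c.filenumber
from b
join (
    select id, count,
           row_number() over (partition by id order by count desc) as rn
    from a
) as NewA on b.id = NewA.id and NewA.rn = 1  -- rn = 1 picks the max count per id
join c on NewA.count = c.count
```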
qid & accept id:
(9475177, 9486410)
query:
SQL: Select transactions where rows are not of criteria inside the same table
soup:
Here is a solution based on nested subqueries. First, I added a few rows to catch a few more cases. Transaction 10, for example, should not be cancelled by transaction 12, because transaction 11 comes in between.
\n> select * from transactions order by date_time;\n+----+---------+------+---------------------+--------+\n| id | account | type | date_time | amount |\n+----+---------+------+---------------------+--------+\n| 1 | 1 | R | 2012-01-01 10:01:00 | 1000 |\n| 2 | 3 | R | 2012-01-02 12:53:10 | 1500 |\n| 3 | 3 | A | 2012-01-03 13:10:01 | -1500 |\n| 4 | 2 | R | 2012-01-03 17:56:00 | 2000 |\n| 5 | 1 | R | 2012-01-04 12:30:01 | 1000 |\n| 6 | 2 | A | 2012-01-04 13:23:01 | -2000 |\n| 7 | 3 | R | 2012-01-04 15:13:10 | 3000 |\n| 8 | 3 | R | 2012-01-05 12:12:00 | 1250 |\n| 9 | 3 | A | 2012-01-06 17:24:01 | -1250 |\n| 10 | 3 | R | 2012-01-07 00:00:00 | 1250 |\n| 11 | 3 | R | 2012-01-07 05:00:00 | 4000 |\n| 12 | 3 | A | 2012-01-08 00:00:00 | -1250 |\n| 14 | 2 | R | 2012-01-09 00:00:00 | 2000 |\n| 13 | 3 | A | 2012-01-10 00:00:00 | -1500 |\n| 15 | 2 | A | 2012-01-11 04:00:00 | -2000 |\n| 16 | 2 | R | 2012-01-12 00:00:00 | 5000 |\n+----+---------+------+---------------------+--------+\n16 rows in set (0.00 sec)\n
\nFirst, create a query to grab, for each transaction, "the date of the most recent transaction before that one in the same account":
\nSELECT t2.*,\n MAX(t1.date_time) AS prev_date\nFROM transactions t1\nJOIN transactions t2\nON (t1.account = t2.account\n AND t2.date_time > t1.date_time)\nGROUP BY t2.account,t2.date_time\nORDER BY t2.date_time;\n\n+----+---------+------+---------------------+--------+---------------------+\n| id | account | type | date_time | amount | prev_date |\n+----+---------+------+---------------------+--------+---------------------+\n| 3 | 3 | A | 2012-01-03 13:10:01 | -1500 | 2012-01-02 12:53:10 |\n| 5 | 1 | R | 2012-01-04 12:30:01 | 1000 | 2012-01-01 10:01:00 |\n| 6 | 2 | A | 2012-01-04 13:23:01 | -2000 | 2012-01-03 17:56:00 |\n| 7 | 3 | R | 2012-01-04 15:13:10 | 3000 | 2012-01-03 13:10:01 |\n| 8 | 3 | R | 2012-01-05 12:12:00 | 1250 | 2012-01-04 15:13:10 |\n| 9 | 3 | A | 2012-01-06 17:24:01 | -1250 | 2012-01-05 12:12:00 |\n| 10 | 3 | R | 2012-01-07 00:00:00 | 1250 | 2012-01-06 17:24:01 |\n| 11 | 3 | R | 2012-01-07 05:00:00 | 4000 | 2012-01-07 00:00:00 |\n| 12 | 3 | A | 2012-01-08 00:00:00 | -1250 | 2012-01-07 05:00:00 |\n| 14 | 2 | R | 2012-01-09 00:00:00 | 2000 | 2012-01-04 13:23:01 |\n| 13 | 3 | A | 2012-01-10 00:00:00 | -1500 | 2012-01-08 00:00:00 |\n| 15 | 2 | A | 2012-01-11 04:00:00 | -2000 | 2012-01-09 00:00:00 |\n| 16 | 2 | R | 2012-01-12 00:00:00 | 5000 | 2012-01-11 04:00:00 |\n+----+---------+------+---------------------+--------+---------------------+\n13 rows in set (0.00 sec)\n
\nUse that as a subquery to get each transaction and its predecessor on the same row. Use some filtering to pull out the transactions we're interested in - namely, 'A' transactions whose predecessors are 'R' transactions that they exactly cancel out -
\nSELECT\n t3.*,transactions.*\nFROM\n transactions\n JOIN\n (SELECT t2.*,\n MAX(t1.date_time) AS prev_date\n FROM transactions t1\n JOIN transactions t2\n ON (t1.account = t2.account\n AND t2.date_time > t1.date_time)\n GROUP BY t2.account,t2.date_time) t3\n ON t3.account = transactions.account\n AND t3.prev_date = transactions.date_time\n AND t3.type='A'\n AND transactions.type='R'\n AND t3.amount + transactions.amount = 0\n ORDER BY t3.date_time;\n\n\n+----+---------+------+---------------------+--------+---------------------+----+---------+------+---------------------+--------+\n| id | account | type | date_time | amount | prev_date | id | account | type | date_time | amount |\n+----+---------+------+---------------------+--------+---------------------+----+---------+------+---------------------+--------+\n| 3 | 3 | A | 2012-01-03 13:10:01 | -1500 | 2012-01-02 12:53:10 | 2 | 3 | R | 2012-01-02 12:53:10 | 1500 |\n| 6 | 2 | A | 2012-01-04 13:23:01 | -2000 | 2012-01-03 17:56:00 | 4 | 2 | R | 2012-01-03 17:56:00 | 2000 |\n| 9 | 3 | A | 2012-01-06 17:24:01 | -1250 | 2012-01-05 12:12:00 | 8 | 3 | R | 2012-01-05 12:12:00 | 1250 |\n| 15 | 2 | A | 2012-01-11 04:00:00 | -2000 | 2012-01-09 00:00:00 | 14 | 2 | R | 2012-01-09 00:00:00 | 2000 |\n+----+---------+------+---------------------+--------+---------------------+----+---------+------+---------------------+--------+\n4 rows in set (0.00 sec)\n
\nFrom the result above it's apparent we're almost there - we've identified the unwanted transactions. Using LEFT JOIN we can filter these out of the whole transaction set:
\nSELECT\n transactions.*\nFROM\n transactions\nLEFT JOIN\n (SELECT\n transactions.id\n FROM\n transactions\n JOIN\n (SELECT t2.*,\n MAX(t1.date_time) AS prev_date\n FROM transactions t1\n JOIN transactions t2\n ON (t1.account = t2.account\n AND t2.date_time > t1.date_time)\n GROUP BY t2.account,t2.date_time) t3\n ON t3.account = transactions.account\n AND t3.prev_date = transactions.date_time\n AND t3.type='A'\n AND transactions.type='R'\n AND t3.amount + transactions.amount = 0) t4\n USING(id)\n WHERE t4.id IS NULL\n AND transactions.type = 'R'\n ORDER BY transactions.date_time;\n\n+----+---------+------+---------------------+--------+\n| id | account | type | date_time | amount |\n+----+---------+------+---------------------+--------+\n| 1 | 1 | R | 2012-01-01 10:01:00 | 1000 |\n| 5 | 1 | R | 2012-01-04 12:30:01 | 1000 |\n| 7 | 3 | R | 2012-01-04 15:13:10 | 3000 |\n| 10 | 3 | R | 2012-01-07 00:00:00 | 1250 |\n| 11 | 3 | R | 2012-01-07 05:00:00 | 4000 |\n| 16 | 2 | R | 2012-01-12 00:00:00 | 5000 |\n+----+---------+------+---------------------+--------+\n
\n
soup wrap:
Here is a solution based on nested subqueries. First, I added a few rows to catch a few more cases. Transaction 10, for example, should not be cancelled by transaction 12, because transaction 11 comes in between.
> select * from transactions order by date_time;
+----+---------+------+---------------------+--------+
| id | account | type | date_time | amount |
+----+---------+------+---------------------+--------+
| 1 | 1 | R | 2012-01-01 10:01:00 | 1000 |
| 2 | 3 | R | 2012-01-02 12:53:10 | 1500 |
| 3 | 3 | A | 2012-01-03 13:10:01 | -1500 |
| 4 | 2 | R | 2012-01-03 17:56:00 | 2000 |
| 5 | 1 | R | 2012-01-04 12:30:01 | 1000 |
| 6 | 2 | A | 2012-01-04 13:23:01 | -2000 |
| 7 | 3 | R | 2012-01-04 15:13:10 | 3000 |
| 8 | 3 | R | 2012-01-05 12:12:00 | 1250 |
| 9 | 3 | A | 2012-01-06 17:24:01 | -1250 |
| 10 | 3 | R | 2012-01-07 00:00:00 | 1250 |
| 11 | 3 | R | 2012-01-07 05:00:00 | 4000 |
| 12 | 3 | A | 2012-01-08 00:00:00 | -1250 |
| 14 | 2 | R | 2012-01-09 00:00:00 | 2000 |
| 13 | 3 | A | 2012-01-10 00:00:00 | -1500 |
| 15 | 2 | A | 2012-01-11 04:00:00 | -2000 |
| 16 | 2 | R | 2012-01-12 00:00:00 | 5000 |
+----+---------+------+---------------------+--------+
16 rows in set (0.00 sec)
First, create a query to grab, for each transaction, "the date of the most recent transaction before that one in the same account":
SELECT t2.*,
MAX(t1.date_time) AS prev_date
FROM transactions t1
JOIN transactions t2
ON (t1.account = t2.account
AND t2.date_time > t1.date_time)
GROUP BY t2.account,t2.date_time
ORDER BY t2.date_time;
+----+---------+------+---------------------+--------+---------------------+
| id | account | type | date_time | amount | prev_date |
+----+---------+------+---------------------+--------+---------------------+
| 3 | 3 | A | 2012-01-03 13:10:01 | -1500 | 2012-01-02 12:53:10 |
| 5 | 1 | R | 2012-01-04 12:30:01 | 1000 | 2012-01-01 10:01:00 |
| 6 | 2 | A | 2012-01-04 13:23:01 | -2000 | 2012-01-03 17:56:00 |
| 7 | 3 | R | 2012-01-04 15:13:10 | 3000 | 2012-01-03 13:10:01 |
| 8 | 3 | R | 2012-01-05 12:12:00 | 1250 | 2012-01-04 15:13:10 |
| 9 | 3 | A | 2012-01-06 17:24:01 | -1250 | 2012-01-05 12:12:00 |
| 10 | 3 | R | 2012-01-07 00:00:00 | 1250 | 2012-01-06 17:24:01 |
| 11 | 3 | R | 2012-01-07 05:00:00 | 4000 | 2012-01-07 00:00:00 |
| 12 | 3 | A | 2012-01-08 00:00:00 | -1250 | 2012-01-07 05:00:00 |
| 14 | 2 | R | 2012-01-09 00:00:00 | 2000 | 2012-01-04 13:23:01 |
| 13 | 3 | A | 2012-01-10 00:00:00 | -1500 | 2012-01-08 00:00:00 |
| 15 | 2 | A | 2012-01-11 04:00:00 | -2000 | 2012-01-09 00:00:00 |
| 16 | 2 | R | 2012-01-12 00:00:00 | 5000 | 2012-01-11 04:00:00 |
+----+---------+------+---------------------+--------+---------------------+
13 rows in set (0.00 sec)
Use that as a subquery to get each transaction and its predecessor on the same row. Use some filtering to pull out the transactions we're interested in - namely, 'A' transactions whose predecessors are 'R' transactions that they exactly cancel out -
SELECT
t3.*,transactions.*
FROM
transactions
JOIN
(SELECT t2.*,
MAX(t1.date_time) AS prev_date
FROM transactions t1
JOIN transactions t2
ON (t1.account = t2.account
AND t2.date_time > t1.date_time)
GROUP BY t2.account,t2.date_time) t3
ON t3.account = transactions.account
AND t3.prev_date = transactions.date_time
AND t3.type='A'
AND transactions.type='R'
AND t3.amount + transactions.amount = 0
ORDER BY t3.date_time;
+----+---------+------+---------------------+--------+---------------------+----+---------+------+---------------------+--------+
| id | account | type | date_time | amount | prev_date | id | account | type | date_time | amount |
+----+---------+------+---------------------+--------+---------------------+----+---------+------+---------------------+--------+
| 3 | 3 | A | 2012-01-03 13:10:01 | -1500 | 2012-01-02 12:53:10 | 2 | 3 | R | 2012-01-02 12:53:10 | 1500 |
| 6 | 2 | A | 2012-01-04 13:23:01 | -2000 | 2012-01-03 17:56:00 | 4 | 2 | R | 2012-01-03 17:56:00 | 2000 |
| 9 | 3 | A | 2012-01-06 17:24:01 | -1250 | 2012-01-05 12:12:00 | 8 | 3 | R | 2012-01-05 12:12:00 | 1250 |
| 15 | 2 | A | 2012-01-11 04:00:00 | -2000 | 2012-01-09 00:00:00 | 14 | 2 | R | 2012-01-09 00:00:00 | 2000 |
+----+---------+------+---------------------+--------+---------------------+----+---------+------+---------------------+--------+
4 rows in set (0.00 sec)
From the result above it's apparent we're almost there - we've identified the unwanted transactions. Using LEFT JOIN we can filter these out of the whole transaction set:
SELECT
transactions.*
FROM
transactions
LEFT JOIN
(SELECT
transactions.id
FROM
transactions
JOIN
(SELECT t2.*,
MAX(t1.date_time) AS prev_date
FROM transactions t1
JOIN transactions t2
ON (t1.account = t2.account
AND t2.date_time > t1.date_time)
GROUP BY t2.account,t2.date_time) t3
ON t3.account = transactions.account
AND t3.prev_date = transactions.date_time
AND t3.type='A'
AND transactions.type='R'
AND t3.amount + transactions.amount = 0) t4
USING(id)
WHERE t4.id IS NULL
AND transactions.type = 'R'
ORDER BY transactions.date_time;
+----+---------+------+---------------------+--------+
| id | account | type | date_time | amount |
+----+---------+------+---------------------+--------+
| 1 | 1 | R | 2012-01-01 10:01:00 | 1000 |
| 5 | 1 | R | 2012-01-04 12:30:01 | 1000 |
| 7 | 3 | R | 2012-01-04 15:13:10 | 3000 |
| 10 | 3 | R | 2012-01-07 00:00:00 | 1250 |
| 11 | 3 | R | 2012-01-07 05:00:00 | 4000 |
| 16 | 2 | R | 2012-01-12 00:00:00 | 5000 |
+----+---------+------+---------------------+--------+
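On MySQL 8.0 and later, the self-join that finds each transaction's predecessor can be replaced with the LAG() window function. An untested sketch of that middle step against the same table:

```sql
-- pull the previous transaction's type and amount within each account;
-- an 'A' row is an exact cancellation of its predecessor when
-- prev_type = 'R' and amount + prev_amount = 0
SELECT id, account, type, date_time, amount,
       LAG(type)   OVER w AS prev_type,
       LAG(amount) OVER w AS prev_amount
FROM transactions
WINDOW w AS (PARTITION BY account ORDER BY date_time);
```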
qid & accept id:
(9518900, 9519129)
query:
how to find teams with sql command
soup:
I know there is nothing like ROW_NUMBER() OVER... in SQLite, but I cannot find anything about something similar to a CROSS APPLY.
\nIf there is something equivalent to a CROSS APPLY, then you can do the following. (EDIT: I noticed the requirement for schools to be able to have multiple teams. This solution would only work with one team per school. You will need a recursive CTE and ROW_NUMBER as far as I can tell, otherwise---which are not available in SQLite to my knowledge)
\nSELECT TeamTable.*\nFROM Table\nCROSS APPLY\n (\n SELECT TOP 4 *\n FROM Table AS InnerTable\n WHERE InnerTable.school = Table.School\n ORDER BY InnerTable.Pos\n ) AS TeamTable\n
\nIf not, then you would probably have to use a while loop and temp tables to fill this. If that is the case, then there is no real gain from using the SQL and I would suggest going the code route.
\nEDIT:\nHowever, this is the temp table solution as was requested. You need the inner while since you could have multiple teams within the school (something I had disregarded before and makes the CROSS APPLY solution not work without a recursive CTE and ROW_NUMBER, which has been edited to acknowledge)
\nCREATE TABLE #SchoolList \n (Id INT IDENTITY(1,1), School VARCHAR(50))\n\nINSERT INTO #SchoolList\nSELECT DISTINCT School\nFROM TeamTable\n\nCREATE TABLE #TeamList\n (TeamNumber INT IDENTITY(1,1), Pos INT, Name VARCHAR(50),\n School VARCHAR(50))\n\nDECLARE @CurrentSchool VARCHAR(50), @CurrentSchoolPos INT\nDECLARE @CurrentSchoolLookupId INT\nSET @CurrentSchoolLookupId = 1\nWHILE EXISTS (SELECT 1 FROM #SchoolList WHERE Id >= @CurrentSchoolLookupId)\nBEGIN\n SELECT @CurrentSchool = School FROM #SchoolList\n WHERE Id = @CurrentSchoolLookupId\n SET @CurrentSchoolPos = (SELECT TOP 1 Pos FROM TeamTable \n WHERE School = @CurrentSchool \n ORDER BY Pos)\n WHILE ISNULL(@CurrentSchoolPos, 0) > 0\n BEGIN\n INSERT INTO #TeamList\n SELECT Pos, Name, School \n FROM TeamTable \n WHERE School = @CurrentSchool AND Pos = @CurrentSchoolPos\n\n SET @CurrentSchoolPos = (SELECT TOP 1 Pos FROM TeamTable \n WHERE School = @CurrentSchool \n AND Pos > @CurrentSchoolPos ORDER BY Pos)\n END\n SET @CurrentSchoolLookupId = @CurrentSchoolLookupId + 1\nEND\n\nSELECT * FROM #TeamList\n
\n
soup wrap:
I know there is nothing like ROW_NUMBER() OVER... in SQLite, and I cannot find anything similar to a CROSS APPLY either.
If there is something equivalent to a CROSS APPLY, then you can do the following. (EDIT: I noticed the requirement that schools be able to have multiple teams. This solution only works with one team per school; otherwise you need a recursive CTE and ROW_NUMBER, neither of which is available in SQLite as far as I can tell.)
SELECT TeamTable.*
FROM Table
CROSS APPLY
(
SELECT TOP 4 *
FROM Table AS InnerTable
WHERE InnerTable.school = Table.School
ORDER BY InnerTable.Pos
) AS TeamTable
If not, then you would probably have to use a while loop and temp tables to fill this. If that is the case, there is no real gain from using SQL, and I would suggest going the code route.
EDIT:
However, here is the temp table solution, as requested. You need the inner WHILE since you could have multiple teams within a school (something I had disregarded before; it also means the CROSS APPLY solution cannot work without a recursive CTE and ROW_NUMBER, which the edit above acknowledges).
CREATE TABLE #SchoolList
(Id INT IDENTITY(1,1), School VARCHAR(50))
INSERT INTO #SchoolList
SELECT DISTINCT School
FROM TeamTable
CREATE TABLE #TeamList
(TeamNumber INT IDENTITY(1,1), Pos INT, Name VARCHAR(50),
School VARCHAR(50))
DECLARE @CurrentSchool VARCHAR(50), @CurrentSchoolPos INT
DECLARE @CurrentSchoolLookupId INT
SET @CurrentSchoolLookupId = 1
WHILE EXISTS (SELECT 1 FROM #SchoolList WHERE Id >= @CurrentSchoolLookupId)
BEGIN
SELECT @CurrentSchool = School FROM #SchoolList
WHERE Id = @CurrentSchoolLookupId
SET @CurrentSchoolPos = (SELECT TOP 1 Pos FROM TeamTable
WHERE School = @CurrentSchool
ORDER BY Pos)
WHILE ISNULL(@CurrentSchoolPos, 0) > 0
BEGIN
INSERT INTO #TeamList
SELECT Pos, Name, School
FROM TeamTable
WHERE School = @CurrentSchool AND Pos = @CurrentSchoolPos
SET @CurrentSchoolPos = (SELECT TOP 1 Pos FROM TeamTable
WHERE School = @CurrentSchool
AND Pos > @CurrentSchoolPos ORDER BY Pos)
END
SET @CurrentSchoolLookupId = @CurrentSchoolLookupId + 1
END
SELECT * FROM #TeamList
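For the record, SQLite gained window functions in version 3.25 (released in 2018, well after this answer), so on current versions ROW_NUMBER() solves this after all. A sketch, assuming columns School and Pos as above; integer division on the row number even handles multiple teams per school:

```sql
-- positions 1-4 per school become team 1, positions 5-8 team 2, and so on
SELECT School, Name, Pos,
       (ROW_NUMBER() OVER (PARTITION BY School ORDER BY Pos) - 1) / 4 + 1
           AS TeamNumber
FROM TeamTable;
```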
qid & accept id:
(9535224, 9535281)
query:
Concatenate Two Values On Insert - SQL
soup:
You are currently using double quotes you should instead use single quotes since that is a valid string in SQL.
\n DOSQL "INSERT INTO Leads (DateTimeField) VALUES (cbdate1 + ' ' + cbtime1)"\n
\nEdit:
\nIf you get further problems, it might be because your DateTimeField is a datetime datatype. In that case, after concatenating, you can convert or cast the string to the correct type.
\nLike:
\n DOSQL "INSERT INTO Leads (DateTimeField) VALUES (Convert(datetime, cbdate1 + ' ' + cbtime1))"\n
\nEdit #2:
\nWithout a 24 hour part you would need a mon dd yyyy format ex: Oct 22 2012. Otherwise you might have to try and get the time part into a 24 hour format.
\n
soup wrap:
You are currently using double quotes; you should instead use single quotes, since that is how a valid string literal is written in SQL.
DOSQL "INSERT INTO Leads (DateTimeField) VALUES (cbdate1 + ' ' + cbtime1)"
Edit:
If you get further problems, it might be because your DateTimeField is a datetime datatype. In that case, after concatenating, you can convert or cast the string to the correct type.
Like:
DOSQL "INSERT INTO Leads (DateTimeField) VALUES (Convert(datetime, cbdate1 + ' ' + cbtime1))"
Edit #2:
Without a 24-hour time part, you would need a mon dd yyyy date format, e.g. Oct 22 2012. Otherwise you might have to try and get the time part into a 24-hour format.
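As a sanity check on the concatenate-then-parse idea, here is what the combined "mon dd yyyy" date plus 12-hour time looks like when parsed (illustrative values; Python is used here only to demonstrate the parse):

```python
from datetime import datetime

# Concatenate a "mon dd yyyy" date and a 12-hour time, then parse the
# combined string. The variable names mirror the question; values made up.
cbdate1 = "Oct 22 2012"
cbtime1 = "1:30PM"
combined = cbdate1 + " " + cbtime1
dt = datetime.strptime(combined, "%b %d %Y %I:%M%p")
```

The AM/PM marker is enough for the parser to produce a 24-hour result (13:30 here).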
qid & accept id:
(9548686, 9548717)
query:
query inside of query
soup:
first join things up.
\nselect q.question_id, q.title\nfrom question q, post p\nwhere q.question_id = p.question_id\n
\nthen filter down to the posts you want
\nselect q.question_id, q.title\nfrom question q, post p\nwhere q.question_id = p.question_id\nand p.post like '%SEARCHTERM%'\n
\n(or full text or whatever)
\nthen count up
\nselect q.question_id, q.title, count( post_id )\nfrom question q, post p\nwhere q.question_id = p.question_id\nand p.post like '%SEARCHTERM%'\ngroup by q.question_id, q.title\n
\n
soup wrap:
first join things up.
select q.question_id, q.title
from question q, post p
where q.question_id = p.question_id
then filter down to the posts you want
select q.question_id, q.title
from question q, post p
where q.question_id = p.question_id
and p.post like '%SEARCHTERM%'
(or full text or whatever)
then count up
select q.question_id, q.title, count( post_id )
from question q, post p
where q.question_id = p.question_id
and p.post like '%SEARCHTERM%'
group by q.question_id, q.title
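The final join-filter-count query can be checked against a toy schema; a Python sqlite3 sketch (table and column names from the answer, data made up):

```python
import sqlite3

# Two questions; only question 1 has posts matching the search term.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE question (question_id INTEGER, title TEXT);
CREATE TABLE post (post_id INTEGER, question_id INTEGER, post TEXT);
INSERT INTO question VALUES (1, 'q1'), (2, 'q2');
INSERT INTO post VALUES
  (10, 1, 'has SEARCHTERM here'),
  (11, 1, 'also SEARCHTERM'),
  (12, 2, 'nothing relevant');
""")
rows = conn.execute("""
SELECT q.question_id, q.title, COUNT(post_id)
FROM question q, post p
WHERE q.question_id = p.question_id
  AND p.post LIKE '%SEARCHTERM%'
GROUP BY q.question_id, q.title
""").fetchall()
```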
qid & accept id:
(9573470, 9573531)
query:
MySQL Selecting from One table into Another Based on ID
soup:
You can use either a subquery (SQLize):
\nUPDATE Table1\nSET Val2 = ( SELECT Val1 FROM Table2 WHERE Table1.ID = Table2.ID )\nWHERE Val2 IS NULL\n
\nor a multi-table update (SQLize):
\nUPDATE Table1, Table2\nSET Table1.Val2 = Table2.Val1\nWHERE Table1.ID = Table2.ID AND Table1.Val2 IS NULL\n
\nor the same with an explicit JOIN (SQLize):
\nUPDATE Table1 JOIN Table2 ON Table1.ID = Table2.ID\nSET Table1.Val2 = Table2.Val1\nWHERE Table1.Val2 IS NULL\n
\n(I assume you only want to update the rows in Table1 for which Val2 is NULL. If you'd rather overwrite the values for all rows with matching IDs in Table2, just remove the WHERE Table1.Val2 IS NULL condition.)
\n
soup wrap:
You can use either a subquery (SQLize):
UPDATE Table1
SET Val2 = ( SELECT Val1 FROM Table2 WHERE Table1.ID = Table2.ID )
WHERE Val2 IS NULL
or a multi-table update (SQLize):
UPDATE Table1, Table2
SET Table1.Val2 = Table2.Val1
WHERE Table1.ID = Table2.ID AND Table1.Val2 IS NULL
or the same with an explicit JOIN (SQLize):
UPDATE Table1 JOIN Table2 ON Table1.ID = Table2.ID
SET Table1.Val2 = Table2.Val1
WHERE Table1.Val2 IS NULL
(I assume you only want to update the rows in Table1 for which Val2 is NULL. If you'd rather overwrite the values for all rows with matching IDs in Table2, just remove the WHERE Table1.Val2 IS NULL condition.)
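The subquery form also works on SQLite (the multi-table UPDATE forms are MySQL-specific); a quick sketch with made-up data:

```python
import sqlite3

# Row 1 has a NULL Val2 and gets the value copied from Table2;
# row 2 already has a value and is left alone.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table1 (ID INTEGER, Val2 TEXT);
CREATE TABLE Table2 (ID INTEGER, Val1 TEXT);
INSERT INTO Table1 VALUES (1, NULL), (2, 'keep');
INSERT INTO Table2 VALUES (1, 'copied'), (2, 'ignored');
""")
conn.execute("""
UPDATE Table1
SET Val2 = (SELECT Val1 FROM Table2 WHERE Table1.ID = Table2.ID)
WHERE Val2 IS NULL
""")
rows = conn.execute("SELECT ID, Val2 FROM Table1 ORDER BY ID").fetchall()
```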
qid & accept id:
(9581458, 9583374)
query:
How can I prevent date overlaps in SQL?
soup:
Consider this query:
\nSELECT *\nFROM Hire AS H1, Hire AS H2\nWHERE H1.carId = H2.carId\nAND H1.hireId < H2.hireId \nAND \n CASE \n WHEN H1.onHireDate > H2.onHireDate THEN H1.onHireDate \n ELSE H2.onHireDate END\n <\n CASE \n WHEN H1.offHireDate > H2.offHireDate THEN H2.offHireDate \n ELSE H1.offHireDate END\n
\nIf all rows meet your business rule, then this query will return the empty set (assuming a closed-open representation of periods, i.e. the end date is the earliest time granule that is not considered within the period).
\nBecause SQL Server does not support subqueries within CHECK constraints, put the same logic in a trigger (but not an INSTEAD OF trigger, unless you can provide logic to resolve overlaps).
\n
\nAlternative query using Fowler:
\nSELECT *\n FROM Hire AS H1, Hire AS H2\n WHERE H1.carId = H2.carId\n AND H1.hireId < H2.hireId \n AND H1.onHireDate < H2.offHireDate \n AND H2.onHireDate < H1.offHireDate;\n
\n
soup wrap:
Consider this query:
SELECT *
FROM Hire AS H1, Hire AS H2
WHERE H1.carId = H2.carId
AND H1.hireId < H2.hireId
AND
CASE
WHEN H1.onHireDate > H2.onHireDate THEN H1.onHireDate
ELSE H2.onHireDate END
<
CASE
WHEN H1.offHireDate > H2.offHireDate THEN H2.offHireDate
ELSE H1.offHireDate END
If all rows meet your business rule, then this query will return the empty set (assuming a closed-open representation of periods, i.e. the end date is the earliest time granule that is not considered within the period).
Because SQL Server does not support subqueries within CHECK constraints, put the same logic in a trigger (but not an INSTEAD OF trigger, unless you can provide logic to resolve overlaps).
Alternative query using Fowler:
SELECT *
FROM Hire AS H1, Hire AS H2
WHERE H1.carId = H2.carId
AND H1.hireId < H2.hireId
AND H1.onHireDate < H2.offHireDate
AND H2.onHireDate < H1.offHireDate;
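The Fowler-style query can be sanity-checked with a few rows (two closed-open periods overlap iff each starts before the other ends); a Python sqlite3 sketch with made-up sample data:

```python
import sqlite3

# Hires 1 and 2 overlap, hires 2 and 3 overlap; hires 1 and 3 merely
# abut (closed-open periods), so they are not reported.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Hire (hireId INTEGER, carId INTEGER,
                   onHireDate TEXT, offHireDate TEXT);
INSERT INTO Hire VALUES
  (1, 7, '2012-01-01', '2012-01-10'),
  (2, 7, '2012-01-05', '2012-01-15'),
  (3, 7, '2012-01-10', '2012-01-20');
""")
rows = conn.execute("""
SELECT H1.hireId, H2.hireId
FROM Hire AS H1, Hire AS H2
WHERE H1.carId = H2.carId
  AND H1.hireId < H2.hireId
  AND H1.onHireDate < H2.offHireDate
  AND H2.onHireDate < H1.offHireDate
ORDER BY H1.hireId, H2.hireId
""").fetchall()
```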
qid & accept id:
(9623187, 9626026)
query:
Best way to replicate Oracles range windowing function in SQL Server
soup:
If I understand correctly, you want the following
\nFor each case_id, channel_index combination:
\n\n- Find the lowest MAX value for all 3 minute windows (min sustained\nvalue)
\n- Find the highest MIN value for all 3 minutes windows (max\nsustained value).
\n- Use data from the preceeding 3 minutes. If 3 minutes has not elapsed since the first (MIN)
start_time value, exclude that data. \n
\nThere are still several unexplained differences between the Oracle query and your solution (both the stored procedure and CLR stored procedure):
\n\n- The Oracle query doesn't ensure the time difference for each window is exactly 3 minutes. It only takes the min/max value for the preceeding 3 minutes. The WHERE clause
first_time + numtodsinterval(3, 'minute') <= start_time removes the time windows before the first 3 minutes has elapsed. \n- The
value_duration column is in the sample data, but not used in the solution \n- The sample data does not include 3 minutes of data, so I changed the time range to 10 seconds
\n- You did not list the expected results for the sample data
\n
\nSOLUTION\n-- This may not be the fastest solution, but it should work --
\nStep 0: Window Time Range -- The sample data does not include 3 minutes of data, so I used a variable to hold the desired number of seconds for the window time range. For the actual data, you could use 180 seconds.
\nDECLARE @seconds int\nSET @seconds = 10\n
\nStep 1: First Time -- Although the first_time isn't important, it is still necessary to make sure we don't include incomplete time periods. It will be used later to exclude data before the first complete time period has elapsed.
\n-- Query to return the first_time, last_time, and range_time\n-- range_time is first complete time period using the time range\nSELECT case_id \n , channel_index \n , MIN(start_time) AS first_time\n , DATEADD(ss, @seconds, MIN(start_time)) AS range_time\n , MAX(start_time) AS last_time\nFROM #continuous_data \nGROUP BY case_id, channel_index\nORDER BY case_id, channel_index\n\n-- Results from the sample data\ncase_id channel_index first_time range_time last_time\n----------- ------------- ----------------------- ----------------------- -----------------------\n2081 50 2011-05-18 09:36:39.000 2011-05-18 09:36:49.000 2011-05-18 09:37:08.000\n2081 51 2011-05-18 09:36:34.000 2011-05-18 09:36:44.000 2011-05-18 09:37:04.000\n
\nStep 2: Time Windows -- The Oracle query uses partition by case_id, channel_index order by start_time range numtodsinterval(3, 'minute') preceeding to find the minimum and maximum dms_value as well as the first_time in the subquery. Since SQL Server does not have the range functionality, you need to use a subquery to define the 3 minute windows. The Oracle query uses range ... preceeding, so the SQL Server range will use DATEADD with a negative value:
\n-- Windowing for each time range. Window is the negative time\n-- range from each start_time row\nSELECT case_id \n , channel_index \n , DATEADD(ss, -@seconds, start_time) AS window_start\n , start_time AS window_end\nFROM #continuous_data \nORDER BY case_id, channel_index, start_time\n
\nStep 3: MIN/MAX for Time Windows -- Next you need to find the minimum and maximum values for each window. This is where the majority of the calculation is performed and needs the most debugging to get the expected results.
\n-- Find the maximum and minimum values for each window range\n-- I included the start_time min/max/diff for debugging\nSELECT su.case_id \n , su.channel_index \n , win.window_end \n , MAX(dms_value) AS dms_max\n , MIN(dms_value) AS dms_min\n , MIN(su.start_time) AS time_min\n , MAX(su.start_time) AS time_max\n , DATEDIFF(ss, MIN(su.start_time), MAX(su.start_time)) AS time_diff\nFROM #continuous_data AS su\n JOIN (\n -- Windowing for each time range. Window is the negative time\n -- range from each start_time row\n SELECT case_id \n , channel_index \n , DATEADD(ss, -@seconds, start_time) AS window_start\n , start_time AS window_end\n FROM #continuous_data \n ) AS win\n ON ( su.case_id = win.case_id\n AND su.channel_index = win.channel_index)\n JOIN (\n -- Find the first_time and add the time range\n SELECT case_id \n , channel_index \n , MIN(start_time) AS first_time\n , DATEADD(ss, @seconds, MIN(start_time)) AS range_time\n FROM #continuous_data \n GROUP BY case_id, channel_index\n ) AS fir\n ON ( su.case_id = fir.case_id\n AND su.channel_index = fir.channel_index)\nWHERE su.start_time BETWEEN win.window_start AND win.window_end\n AND win.window_end >= fir.range_time\nGROUP BY su.case_id, su.channel_index, win.window_end\nORDER BY su.case_id, su.channel_index, win.window_end\n\n-- Results from sample data:\ncase_id channel_index window_end dms_max dms_min time_min time_max time_diff\n----------- ------------- ----------------------- ---------------------- ---------------------- ----------------------- ----------------------- -----------\n2081 50 2011-05-18 09:36:49.000 104.5625 94.8125 2011-05-18 09:36:39.000 2011-05-18 09:36:49.000 10\n2081 50 2011-05-18 09:36:50.000 105.8125 95.4375 2011-05-18 09:36:40.000 2011-05-18 09:36:50.000 10\n2081 50 2011-05-18 09:36:52.000 107.125 98.0625 2011-05-18 09:36:42.000 2011-05-18 09:36:52.000 10\n2081 50 2011-05-18 09:36:53.000 108.4375 99.3125 2011-05-18 09:36:44.000 2011-05-18 09:36:53.000 9\n2081 50 2011-05-18 09:36:54.000 
109.75 99.3125 2011-05-18 09:36:44.000 2011-05-18 09:36:54.000 10\n2081 50 2011-05-18 09:36:55.000 111.0625 100.625 2011-05-18 09:36:45.000 2011-05-18 09:36:55.000 10\n2081 50 2011-05-18 09:36:57.000 112.3125 103.25 2011-05-18 09:36:48.000 2011-05-18 09:36:57.000 9\n2081 50 2011-05-18 09:36:58.000 113.625 103.25 2011-05-18 09:36:48.000 2011-05-18 09:36:58.000 10\n2081 50 2011-05-18 09:36:59.000 114.9375 104.5625 2011-05-18 09:36:49.000 2011-05-18 09:36:59.000 10\n2081 50 2011-05-18 09:37:01.000 116.25 107.125 2011-05-18 09:36:52.000 2011-05-18 09:37:01.000 9\n2081 50 2011-05-18 09:37:02.000 117.5 107.125 2011-05-18 09:36:52.000 2011-05-18 09:37:02.000 10\n2081 50 2011-05-18 09:37:03.000 118.8125 108.4375 2011-05-18 09:36:53.000 2011-05-18 09:37:03.000 10\n2081 50 2011-05-18 09:37:05.000 120.125 111.0625 2011-05-18 09:36:55.000 2011-05-18 09:37:05.000 10\n2081 50 2011-05-18 09:37:06.000 121.4375 112.3125 2011-05-18 09:36:57.000 2011-05-18 09:37:06.000 9\n2081 50 2011-05-18 09:37:07.000 122.75 112.3125 2011-05-18 09:36:57.000 2011-05-18 09:37:07.000 10\n2081 50 2011-05-18 09:37:08.000 124.0625 113.625 2011-05-18 09:36:58.000 2011-05-18 09:37:08.000 10\n2081 51 2011-05-18 09:36:46.000 98 96 2011-05-18 09:36:40.000 2011-05-18 09:36:46.000 6\n2081 51 2011-05-18 09:36:52.000 98 92 2011-05-18 09:36:46.000 2011-05-18 09:36:52.000 6\n2081 51 2011-05-18 09:36:58.000 92 86 2011-05-18 09:36:52.000 2011-05-18 09:36:58.000 6\n2081 51 2011-05-18 09:37:04.000 86 80 2011-05-18 09:36:58.000 2011-05-18 09:37:04.000 6\n
\nStep 4: Finally, you can put it all together to return the lowest MAX value and highest MIN value for each time window:
\nSELECT su.case_id \n , su.channel_index \n , MIN(dms_max) AS su_min\n , MAX(dms_min) AS su_max\nFROM (\n SELECT su.case_id \n , su.channel_index \n , win.window_end \n , MAX(dms_value) AS dms_max\n , MIN(dms_value) AS dms_min\n FROM #continuous_data AS su\n JOIN (\n -- Windowing for each time range. Window is the negative time\n -- range from each start_time row\n SELECT case_id \n , channel_index \n , DATEADD(ss, -@seconds, start_time) AS window_start\n , start_time AS window_end\n FROM #continuous_data \n ) AS win\n ON ( su.case_id = win.case_id\n AND su.channel_index = win.channel_index)\n JOIN (\n -- Find the first_time and add the time range\n SELECT case_id \n , channel_index \n , MIN(start_time) AS first_time\n , DATEADD(ss, @seconds, MIN(start_time)) AS range_time\n FROM #continuous_data \n GROUP BY case_id, channel_index\n ) AS fir\n ON ( su.case_id = fir.case_id\n AND su.channel_index = fir.channel_index)\n WHERE su.start_time BETWEEN win.window_start AND win.window_end\n AND win.window_end >= fir.range_time\n GROUP BY su.case_id, su.channel_index, win.window_end\n) AS su\nGROUP BY su.case_id, su.channel_index\nORDER BY su.case_id, su.channel_index\n\n-- Results from sample data:\ncase_id channel_index su_min su_max\n----------- ------------- ---------------------- ----------------------\n2081 50 104.5625 113.625\n2081 51 86 96\n
\n
soup wrap:
If I understand correctly, you want the following
For each case_id, channel_index combination:
- Find the lowest MAX value over all 3-minute windows (the minimum sustained value).
- Find the highest MIN value over all 3-minute windows (the maximum sustained value).
- Use data from the preceding 3 minutes. If 3 minutes has not elapsed since the first (MIN) start_time value, exclude that data.
There are still several unexplained differences between the Oracle query and your solution (both the stored procedure and the CLR stored procedure):
- The Oracle query doesn't ensure the time difference for each window is exactly 3 minutes. It only takes the min/max value for the preceding 3 minutes. The WHERE clause first_time + numtodsinterval(3, 'minute') <= start_time removes the time windows before the first 3 minutes have elapsed.
- The value_duration column is in the sample data, but not used in the solution.
- The sample data does not include 3 minutes of data, so I changed the time range to 10 seconds.
- You did not list the expected results for the sample data.
SOLUTION
-- This may not be the fastest solution, but it should work --
Step 0: Window Time Range -- The sample data does not include 3 minutes of data, so I used a variable to hold the desired number of seconds for the window time range. For the actual data, you could use 180 seconds.
DECLARE @seconds int
SET @seconds = 10
Step 1: First Time -- Although the first_time isn't important, it is still necessary to make sure we don't include incomplete time periods. It will be used later to exclude data before the first complete time period has elapsed.
-- Query to return the first_time, last_time, and range_time
-- range_time is first complete time period using the time range
SELECT case_id
, channel_index
, MIN(start_time) AS first_time
, DATEADD(ss, @seconds, MIN(start_time)) AS range_time
, MAX(start_time) AS last_time
FROM #continuous_data
GROUP BY case_id, channel_index
ORDER BY case_id, channel_index
-- Results from the sample data
case_id channel_index first_time range_time last_time
----------- ------------- ----------------------- ----------------------- -----------------------
2081 50 2011-05-18 09:36:39.000 2011-05-18 09:36:49.000 2011-05-18 09:37:08.000
2081 51 2011-05-18 09:36:34.000 2011-05-18 09:36:44.000 2011-05-18 09:37:04.000
Step 2: Time Windows -- The Oracle query uses partition by case_id, channel_index order by start_time range numtodsinterval(3, 'minute') preceding to find the minimum and maximum dms_value as well as the first_time in the subquery. Since SQL Server does not have the range functionality, you need to use a subquery to define the 3-minute windows. The Oracle query uses range ... preceding, so the SQL Server range will use DATEADD with a negative value:
-- Windowing for each time range. Window is the negative time
-- range from each start_time row
SELECT case_id
, channel_index
, DATEADD(ss, -@seconds, start_time) AS window_start
, start_time AS window_end
FROM #continuous_data
ORDER BY case_id, channel_index, start_time
Step 3: MIN/MAX for Time Windows -- Next you need to find the minimum and maximum values for each window. This is where the majority of the calculation is performed and needs the most debugging to get the expected results.
-- Find the maximum and minimum values for each window range
-- I included the start_time min/max/diff for debugging
SELECT su.case_id
, su.channel_index
, win.window_end
, MAX(dms_value) AS dms_max
, MIN(dms_value) AS dms_min
, MIN(su.start_time) AS time_min
, MAX(su.start_time) AS time_max
, DATEDIFF(ss, MIN(su.start_time), MAX(su.start_time)) AS time_diff
FROM #continuous_data AS su
JOIN (
-- Windowing for each time range. Window is the negative time
-- range from each start_time row
SELECT case_id
, channel_index
, DATEADD(ss, -@seconds, start_time) AS window_start
, start_time AS window_end
FROM #continuous_data
) AS win
ON ( su.case_id = win.case_id
AND su.channel_index = win.channel_index)
JOIN (
-- Find the first_time and add the time range
SELECT case_id
, channel_index
, MIN(start_time) AS first_time
, DATEADD(ss, @seconds, MIN(start_time)) AS range_time
FROM #continuous_data
GROUP BY case_id, channel_index
) AS fir
ON ( su.case_id = fir.case_id
AND su.channel_index = fir.channel_index)
WHERE su.start_time BETWEEN win.window_start AND win.window_end
AND win.window_end >= fir.range_time
GROUP BY su.case_id, su.channel_index, win.window_end
ORDER BY su.case_id, su.channel_index, win.window_end
-- Results from sample data:
case_id channel_index window_end dms_max dms_min time_min time_max time_diff
----------- ------------- ----------------------- ---------------------- ---------------------- ----------------------- ----------------------- -----------
2081 50 2011-05-18 09:36:49.000 104.5625 94.8125 2011-05-18 09:36:39.000 2011-05-18 09:36:49.000 10
2081 50 2011-05-18 09:36:50.000 105.8125 95.4375 2011-05-18 09:36:40.000 2011-05-18 09:36:50.000 10
2081 50 2011-05-18 09:36:52.000 107.125 98.0625 2011-05-18 09:36:42.000 2011-05-18 09:36:52.000 10
2081 50 2011-05-18 09:36:53.000 108.4375 99.3125 2011-05-18 09:36:44.000 2011-05-18 09:36:53.000 9
2081 50 2011-05-18 09:36:54.000 109.75 99.3125 2011-05-18 09:36:44.000 2011-05-18 09:36:54.000 10
2081 50 2011-05-18 09:36:55.000 111.0625 100.625 2011-05-18 09:36:45.000 2011-05-18 09:36:55.000 10
2081 50 2011-05-18 09:36:57.000 112.3125 103.25 2011-05-18 09:36:48.000 2011-05-18 09:36:57.000 9
2081 50 2011-05-18 09:36:58.000 113.625 103.25 2011-05-18 09:36:48.000 2011-05-18 09:36:58.000 10
2081 50 2011-05-18 09:36:59.000 114.9375 104.5625 2011-05-18 09:36:49.000 2011-05-18 09:36:59.000 10
2081 50 2011-05-18 09:37:01.000 116.25 107.125 2011-05-18 09:36:52.000 2011-05-18 09:37:01.000 9
2081 50 2011-05-18 09:37:02.000 117.5 107.125 2011-05-18 09:36:52.000 2011-05-18 09:37:02.000 10
2081 50 2011-05-18 09:37:03.000 118.8125 108.4375 2011-05-18 09:36:53.000 2011-05-18 09:37:03.000 10
2081 50 2011-05-18 09:37:05.000 120.125 111.0625 2011-05-18 09:36:55.000 2011-05-18 09:37:05.000 10
2081 50 2011-05-18 09:37:06.000 121.4375 112.3125 2011-05-18 09:36:57.000 2011-05-18 09:37:06.000 9
2081 50 2011-05-18 09:37:07.000 122.75 112.3125 2011-05-18 09:36:57.000 2011-05-18 09:37:07.000 10
2081 50 2011-05-18 09:37:08.000 124.0625 113.625 2011-05-18 09:36:58.000 2011-05-18 09:37:08.000 10
2081 51 2011-05-18 09:36:46.000 98 96 2011-05-18 09:36:40.000 2011-05-18 09:36:46.000 6
2081 51 2011-05-18 09:36:52.000 98 92 2011-05-18 09:36:46.000 2011-05-18 09:36:52.000 6
2081 51 2011-05-18 09:36:58.000 92 86 2011-05-18 09:36:52.000 2011-05-18 09:36:58.000 6
2081 51 2011-05-18 09:37:04.000 86 80 2011-05-18 09:36:58.000 2011-05-18 09:37:04.000 6
Step 4: Finally, you can put it all together to return the lowest MAX value and highest MIN value for each time window:
SELECT su.case_id
, su.channel_index
, MIN(dms_max) AS su_min
, MAX(dms_min) AS su_max
FROM (
SELECT su.case_id
, su.channel_index
, win.window_end
, MAX(dms_value) AS dms_max
, MIN(dms_value) AS dms_min
FROM #continuous_data AS su
JOIN (
-- Windowing for each time range. Window is the negative time
-- range from each start_time row
SELECT case_id
, channel_index
, DATEADD(ss, -@seconds, start_time) AS window_start
, start_time AS window_end
FROM #continuous_data
) AS win
ON ( su.case_id = win.case_id
AND su.channel_index = win.channel_index)
JOIN (
-- Find the first_time and add the time range
SELECT case_id
, channel_index
, MIN(start_time) AS first_time
, DATEADD(ss, @seconds, MIN(start_time)) AS range_time
FROM #continuous_data
GROUP BY case_id, channel_index
) AS fir
ON ( su.case_id = fir.case_id
AND su.channel_index = fir.channel_index)
WHERE su.start_time BETWEEN win.window_start AND win.window_end
AND win.window_end >= fir.range_time
GROUP BY su.case_id, su.channel_index, win.window_end
) AS su
GROUP BY su.case_id, su.channel_index
ORDER BY su.case_id, su.channel_index
-- Results from sample data:
case_id channel_index su_min su_max
----------- ------------- ---------------------- ----------------------
2081 50 104.5625 113.625
2081 51 86 96
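The core idea of Steps 2-4 (self-join to form each trailing window, MAX per window, then MIN over all windows, skipping windows that end before one full window has elapsed) can be sketched compactly. This toy version uses integer timestamps and a 2-second trailing window; all names and data are made up:

```python
import sqlite3

# Windows ending at t=2,3,4 have maxes 7, 9, 9; the window ending at
# t=0 or t=1 is incomplete and excluded, so the "min sustained" is 7.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE readings (t INTEGER, v REAL);
INSERT INTO readings VALUES (0, 5), (1, 7), (2, 6), (3, 9), (4, 4);
""")
(min_sustained,) = conn.execute("""
SELECT MIN(window_max) FROM (
  SELECT w.t AS window_end, MAX(d.v) AS window_max
  FROM readings d
  JOIN readings w ON d.t BETWEEN w.t - 2 AND w.t
  WHERE w.t >= (SELECT MIN(t) + 2 FROM readings)
  GROUP BY w.t
)
""").fetchone()
```

The full T-SQL above is the same shape, with DATEADD providing the `w.t - 2` arithmetic on datetimes.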
qid & accept id:
(9630004, 9630219)
query:
How to decrease the Auto increment _id in android SQLite?
soup:
EDIT: Maybe I should make it clear that just inserting the rows with the correct id instead of manipulating the sequence number definitely is a better idea than the below method. If there's no row with id=3 in the table, you can just insert with a fixed value in the id even in an AUTOINCREMENT table.
\n
\nThat said, if you're really sure, you can set the auto increment value to any value using;
\nUPDATE sqlite_sequence set seq= where name=;\n\nThat is, if you want AUTOINCREMENT on the next insert on table 'TableA' to generate 5, you do;
\nUPDATE sqlite_sequence set seq=4 where name='TableA';\n
\nNote that resetting seq behaves a bit different from what you may expect, it just means that the lowest id generated will be the greater of seq + 1 and the max id still in the table + 1.
\nThat is, if you delete all values >=5, you can reset the sequence value to 4 and have 5 generated as the next sequence number, but if you still have the id 10 in the table, the next number generated will be 11 instead.
\nMaybe I should point out the fact that I cannot find this exact behavior documented anywhere, so I'd not rely on the behavior for every future version of sqlite. It works now, it may not tomorrow.
\n
soup wrap:
EDIT: Maybe I should make it clear that just inserting the rows with the correct id instead of manipulating the sequence number definitely is a better idea than the below method. If there's no row with id=3 in the table, you can just insert with a fixed value in the id even in an AUTOINCREMENT table.
That said, if you're really sure, you can set the auto increment value to any value using;
UPDATE sqlite_sequence SET seq = <new value> WHERE name = '<table name>';
That is, if you want AUTOINCREMENT on the next insert on table 'TableA' to generate 5, you do;
UPDATE sqlite_sequence set seq=4 where name='TableA';
Note that resetting seq behaves a bit differently from what you may expect; it just means that the lowest id generated will be the greater of seq + 1 and the max id still in the table + 1.
That is, if you delete all values >=5, you can reset the sequence value to 4 and have 5 generated as the next sequence number, but if you still have the id 10 in the table, the next number generated will be 11 instead.
Maybe I should point out the fact that I cannot find this exact behavior documented anywhere, so I'd not rely on the behavior for every future version of sqlite. It works now, it may not tomorrow.
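The behaviour described above is easy to verify from code; a Python sqlite3 sketch (table name made up):

```python
import sqlite3

# After deleting ids >= 5 and setting seq back to 4, the next
# AUTOINCREMENT insert gets id 5, as described above.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE TableA (id INTEGER PRIMARY KEY AUTOINCREMENT, v TEXT)")
for _ in range(5):                      # ids 1..5
    conn.execute("INSERT INTO TableA (v) VALUES ('x')")
conn.execute("DELETE FROM TableA WHERE id >= 5")
conn.execute("UPDATE sqlite_sequence SET seq = 4 WHERE name = 'TableA'")
cur = conn.execute("INSERT INTO TableA (v) VALUES ('y')")
new_id = cur.lastrowid
```

If id 10 were still in the table, the same insert would get 11 instead, regardless of seq.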
qid & accept id:
(9630859, 9631098)
query:
fetching data from database and set it on edittext
soup:
Check code for database in the following link Android SQLite
\nYou have to store the values retrieved from the database in an ArrayList and set the value on the EditText like this:
\n// myarraylist is the arraylist which contains \n// the data retrieved from database\neditText.setText(myarraylist.get(0)); \n
\nAfter the data is retrieved, check whether the length of editText.getText().toString() is greater than zero; if it is, prevent the user from editing the text in the EditText by using the following:
\n editText.setFocusable(false);\n
\n
soup wrap:
Check code for database in the following link Android SQLite
You have to store the values retrieved from the database in an ArrayList and set the value on the EditText like this:
// myarraylist is the arraylist which contains
// the data retrieved from database
editText.setText(myarraylist.get(0));
After the data is retrieved, check whether the length of editText.getText().toString() is greater than zero; if it is, prevent the user from editing the text in the EditText by using the following:
editText.setFocusable(false);
qid & accept id:
(9655852, 9656733)
query:
sum of customer transactions
soup:
Actually, your example is not appropriate or you're missing information about the problem itself. Answer this question: If you want one line including a total what serial number do you want for that line? It is against common sense to have a total with detailed information (as long as you don't specify a criteria such as and also I want the most recent purchase date for each email).
\nAnother way to see this is: what criteria did you apply to select the serial number 1087-7072 instead of 2447-7971 for zzz@msn.com? The same question applies for fields 1 and 3.
\nSo, what I understand it would be useful for you (and minimal, of course) would be this:
\n36.00 T T xxx@gmail.com\n6.00 R T yyy@gmail.com\n46.00 P B zzz@msn.com \n10.00 y a aaa@aol.com\n
\nYou can get this with the following query (based on your table schema, I assume name has those values P B):
\nselect sum(`Purchase Price`) as total_sum, name, email from purchases\nwhere `Purchase Date` between '2012-01-01' and '2012-01-31'\ngroup by email, name\norder by email\n
\nLet me know if this is what you're (actually) looking for.
\n
soup wrap:
Actually, your example is not appropriate or you're missing information about the problem itself. Answer this question: If you want one line including a total what serial number do you want for that line? It is against common sense to have a total with detailed information (as long as you don't specify a criteria such as and also I want the most recent purchase date for each email).
Another way to see this is: what criteria did you apply to select the serial number 1087-7072 instead of 2447-7971 for zzz@msn.com? The same question applies for fields 1 and 3.
So, what I understand it would be useful for you (and minimal, of course) would be this:
36.00 T T xxx@gmail.com
6.00 R T yyy@gmail.com
46.00 P B zzz@msn.com
10.00 y a aaa@aol.com
You can get this with the following query (based on your table schema, I assume name has those values P B):
select sum(`Purchase Price`) as total_sum, name, email from purchases
where `Purchase Date` between '2012-01-01' and '2012-01-31'
group by email, name
order by email
Let me know if this is what you're (actually) looking for.
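A quick check of the suggested query (SQLite also accepts the backtick-quoted identifiers; data made up):

```python
import sqlite3

# Two January purchases for one email, one for another; the grouped
# totals come back one row per (email, name).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE purchases (`Purchase Price` REAL, name TEXT, email TEXT,
                        `Purchase Date` TEXT);
INSERT INTO purchases VALUES
  (20.00, 'T T', 'xxx@gmail.com', '2012-01-10'),
  (16.00, 'T T', 'xxx@gmail.com', '2012-01-20'),
  (6.00,  'R T', 'yyy@gmail.com', '2012-01-05');
""")
rows = conn.execute("""
SELECT SUM(`Purchase Price`) AS total_sum, name, email FROM purchases
WHERE `Purchase Date` BETWEEN '2012-01-01' AND '2012-01-31'
GROUP BY email, name
ORDER BY email
""").fetchall()
```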
qid & accept id:
(9704624, 9739296)
query:
Oracle APEX - Saving Shuttle Item selections to a new table
soup:
APEX provides a utility to split the values out of a shuttle item like this:
\ndeclare\n tab apex_application_global.vc_arr2;\nbegin\n tab := apex_util.string_to_table (:p1_multiple_item);\n ...\nend;\n
\nSo for your requirement you could do:
\ndeclare\n tab apex_application_global.vc_arr2;\nbegin\n tab := apex_util.string_to_table (:p1_multiple_item);\n for i in 1..tab.count loop\n insert into order_parts_table (order_number, part_number, order_status)\n values (:p1_order_number, tab(i), 'ACTIVE');\n end loop;\nend;\n
\n(NB I have not dealt with whether the row already exists, but you get the idea.)
\nThe processing for removing items will be along the same lines, though a bit more complicated.
\n
soup wrap:
APEX provides a utility to split the values out of a shuttle item like this:
declare
tab apex_application_global.vc_arr2;
begin
tab := apex_util.string_to_table (:p1_multiple_item);
...
end;
So for your requirement you could do:
declare
tab apex_application_global.vc_arr2;
begin
tab := apex_util.string_to_table (:p1_multiple_item);
for i in 1..tab.count loop
insert into order_parts_table (order_number, part_number, order_status)
values (:p1_order_number, tab(i), 'ACTIVE');
end loop;
end;
(NB I have not dealt with whether the row already exists, but you get the idea.)
The processing for removing items will be along the same lines, though a bit more complicated.
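Outside the database, the same split-and-insert loop looks like this; an APEX shuttle item holds its selection as a single colon-separated string, which is what apex_util.string_to_table splits (the values below are made up, and sqlite3 stands in for the Oracle table):

```python
import sqlite3

# Split the colon-delimited shuttle value and insert one row per part.
shuttle_value = "P100:P200:P300"         # stand-in for :P1_MULTIPLE_ITEM
order_number = 42                        # stand-in for :P1_ORDER_NUMBER

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE order_parts_table
    (order_number INTEGER, part_number TEXT, order_status TEXT)""")
for part in shuttle_value.split(":"):
    conn.execute(
        "INSERT INTO order_parts_table VALUES (?, ?, 'ACTIVE')",
        (order_number, part))
rows = conn.execute(
    "SELECT * FROM order_parts_table ORDER BY part_number").fetchall()
```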
qid & accept id:
(9755681, 9755750)
query:
Use regexp_instr to get the last number in a string
soup:
If you were using 11g, you could use regexp_count to determine the number of times that a pattern exists in the string and feed that into the regexp_instr
\nregexp_instr( str,\n '[[:digit:]]',\n 1,\n regexp_count( str, '[[:digit:]]')\n )\n
\nSince you're on 10g, however, the simplest option is probably to reverse the string and subtract the position that is found from the length of the string
\nlength(str) - regexp_instr(reverse(str),'[[:digit:]]') + 1\n
\nBoth approaches should work in 11g
\nSQL> ed\nWrote file afiedt.buf\n\n 1 with x as (\n 2 select '500 Oracle Parkway, Redwood Shores, CA' str\n 3 from dual\n 4 )\n 5 select length(str) - regexp_instr(reverse(str),'[[:digit:]]') + 1,\n 6 regexp_instr( str,\n 7 '[[:digit:]]',\n 8 1,\n 9 regexp_count( str, '[[:digit:]]')\n 10 )\n 11* from x\nSQL> /\n\nLENGTH(STR)-REGEXP_INSTR(REVERSE(STR),'[[:DIGIT:]]')+1\n------------------------------------------------------\nREGEXP_INSTR(STR,'[[:DIGIT:]]',1,REGEXP_COUNT(STR,'[[:DIGIT:]]'))\n-----------------------------------------------------------------\n 3\n 3\n
\n
soup wrap:
If you were using 11g, you could use regexp_count to determine the number of times that a pattern exists in the string and feed that into the regexp_instr
regexp_instr( str,
'[[:digit:]]',
1,
regexp_count( str, '[[:digit:]]')
)
Since you're on 10g, however, the simplest option is probably to reverse the string and subtract the position that is found from the length of the string
length(str) - regexp_instr(reverse(str),'[[:digit:]]') + 1
Both approaches should work in 11g
SQL> ed
Wrote file afiedt.buf
1 with x as (
2 select '500 Oracle Parkway, Redwood Shores, CA' str
3 from dual
4 )
5 select length(str) - regexp_instr(reverse(str),'[[:digit:]]') + 1,
6 regexp_instr( str,
7 '[[:digit:]]',
8 1,
9 regexp_count( str, '[[:digit:]]')
10 )
11* from x
SQL> /
LENGTH(STR)-REGEXP_INSTR(REVERSE(STR),'[[:DIGIT:]]')+1
------------------------------------------------------
REGEXP_INSTR(STR,'[[:DIGIT:]]',1,REGEXP_COUNT(STR,'[[:DIGIT:]]'))
-----------------------------------------------------------------
3
3
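The 10g reverse-string arithmetic can be mirrored outside the database too; a small Python sketch of the same calculation:

```python
# Position (1-based) of the last digit equals
# len(s) - (position of the first digit in reversed s) + 1,
# mirroring length(str) - regexp_instr(reverse(str), '[[:digit:]]') + 1.
s = "500 Oracle Parkway, Redwood Shores, CA"

def first_digit_pos(t):            # 1-based, like regexp_instr; 0 if none
    for i, ch in enumerate(t, start=1):
        if ch.isdigit():
            return i
    return 0

last_digit_pos = len(s) - first_digit_pos(s[::-1]) + 1
```

For this address the last digit is the final '0' of "500", at position 3, matching the SQL*Plus output above.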
qid & accept id:
(9760884, 9760912)
query:
Hebrew and other languages in sql
soup:
You need to store it as nvarchar and make sure to prefix the literal text with N.
\nexample
\ndeclare @n nchar(1)\nset @n = N'文' \n\nselect @n\nGO\n\ndeclare @n nchar(1)\nset @n = '文' \n\nselect @n\n
\noutput
\n----\n文\n\n(1 row(s) affected)\n\n\n----\n?\n\n(1 row(s) affected)\n
\nThe N before the string value tells SQL Server to treat it as unicode, notice that you get a question mark back when you don't use N?
\nIn terms of searching, take a look at Performance Impacts of Unicode, Equals vs LIKE, and Partially Filled Fixed Width
\n
soup wrap:
You need to store it as nvarchar and make sure to prefix the text with N.
example
declare @n nchar(1)
set @n = N'文'
select @n
GO
declare @n nchar(1)
set @n = '文'
select @n
output
----
文
(1 row(s) affected)
----
?
(1 row(s) affected)
The N before the string value tells SQL Server to treat it as Unicode; notice that you get a question mark back when you don't use N.
In terms of searching, take a look at Performance Impacts of Unicode, Equals vs LIKE, and Partially Filled Fixed Width
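The mangling can be mimicked outside SQL Server: a varchar literal without the N prefix is squeezed through the database's single-byte code page, and anything the code page can't represent degrades to a question mark. A rough Python illustration (cp1252 here is just a stand-in for the server's default code page):

```python
s = '文'

# Encode through a single-byte code page, replacing unrepresentable
# characters, then decode back -- roughly what happens without N
degraded = s.encode('cp1252', errors='replace').decode('cp1252')
print(degraded)  # prints "?"
```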
qid & accept id:
(9764030, 9767068)
query:
SQL Server 2008 Prior String Extract
soup:
The following may appear somewhat specific and too assuming, even though it might also look a bit too complicated for a specific and over-assuming solution. Still, I hope it will at least make a good starting point.
\nThese are the assumptions I had to make to avoid complicating the script even further:
\n\nThe values to be extracted never contain a decimal point (are integers).
\nThe values to be extracted are always either preceded by a space or at the beginning of the column value.
\nNeither GB nor MB can possibly be part of anything else than a traffic size (a value to be extracted).
\nNeither GB nor MB is ever preceded by a space.
\nAll the strings are either unique or accompanied by another column or columns that can be used as key values. (My solution, in particular, uses an additional column as a key.)
\n
\nSo, here's my attempt (which did return the expected results for all the sample data provided in the original post):
\nWITH data (id, str) AS (\n SELECT 1, '$15 / 1GB 24m + Intern 120MB' ----------> 1.12 GB\n UNION ALL SELECT 2, '$19.95 / 500MB + $49.95 / 9GB Blackberry' -----> 9.5GB\n UNION ALL SELECT 3, '$174.95 Blackberry 24GB + $10 / 1GB Datapack' ----> 25GB\n UNION ALL SELECT 4, '$79 / 6GB' --> 6GB\n UNION ALL SELECT 5, Null --> Null\n UNION ALL SELECT 6, '$20 Plan' --> 0GB\n UNION ALL SELECT 7, '460MB' --> 0.46GB\n),\nunified AS (\n SELECT\n id,\n oldstr = str,\n str = REPLACE(str, 'GB', '000MB')\n FROM data\n),\nsplit AS (\n SELECT\n id,\n ofs = 0,\n endpos = CHARINDEX('MB', str),\n length = ISNULL(CHARINDEX(' ', REVERSE(SUBSTRING(str, 1, NULLIF(CHARINDEX('MB', str), 0) - 1)) + ' ') - 1, 0),\n str = SUBSTRING(str, NULLIF(CHARINDEX('MB', str), 0) + 2, 999999)\n FROM unified\n UNION ALL\n SELECT\n id,\n ofs = NULLIF(endpos, 0) + 1,\n endpos = CHARINDEX('MB', str),\n length = ISNULL(CHARINDEX(' ', REVERSE(SUBSTRING(str, 1, NULLIF(CHARINDEX('MB', str), 0) - 1)) + ' ') - 1, 0),\n str = SUBSTRING(str, NULLIF(CHARINDEX('MB', str), 0) + 2, 999999)\n FROM split\n WHERE length > 0\n),\nextracted AS (\n SELECT\n d.id,\n str = d.oldstr,\n mb = CAST(SUBSTRING(d.str, s.ofs + s.endpos - s.length, s.length) AS int)\n FROM unified d\n INNER JOIN split s ON d.id = s.id\n)\nSELECT\n id,\n str,\n gb = RTRIM(CAST(SUM(mb) AS float) / 1000) + 'GB'\nFROM extracted\nGROUP BY id, str\nORDER BY id\n
\nBasically, the idea is first to convert all gigabytes to megabytes, to then be able search and extract only megabyte amounts. The search & extract method involves a recursive CTE and consists essentially of these steps:
\n1) find the position of the first MB;
\n2) find the length of the number immediately before the MB;
\n3) cut off the beginning of the string right at the end of the first MB;
\n4) repeat from Step 1 until no MB is found;
\n5) join the found figures to the original string list to extract the amounts themselves.
\nAfterwards, it only remains for us to group by key values and sum the obtained amounts. Here's the output:
\nid str gb\n-- -------------------------------------------- ------\n1 $15 / 1GB 24m + Intern 120MB 1.12GB\n2 $19.95 / 500MB + $49.95 / 9GB Blackberry 9.5GB\n3 $174.95 Blackberry 24GB + $10 / 1GB Datapack 25GB\n4 $79 / 6GB 6GB\n5 NULL NULL\n6 $20 Plan 0GB\n7 460MB 0.46GB\n
\n
soup wrap:
The following may appear rather specific and makes quite a few assumptions, and it might also look a bit too complicated for such a specific, assumption-heavy solution. Still, I hope it will at least make a good starting point.
These are the assumptions I had to make to avoid complicating the script even further:
The values to be extracted never contain a decimal point (are integers).
The values to be extracted are always either preceded by a space or at the beginning of the column value.
Neither GB nor MB can possibly be part of anything other than a traffic size (a value to be extracted).
Neither GB nor MB is ever preceded by a space.
All the strings are either unique or accompanied by another column or columns that can be used as key values. (My solution, in particular, uses an additional column as a key.)
So, here's my attempt (which did return the expected results for all the sample data provided in the original post):
WITH data (id, str) AS (
SELECT 1, '$15 / 1GB 24m + Intern 120MB' ----------> 1.12 GB
UNION ALL SELECT 2, '$19.95 / 500MB + $49.95 / 9GB Blackberry' -----> 9.5GB
UNION ALL SELECT 3, '$174.95 Blackberry 24GB + $10 / 1GB Datapack' ----> 25GB
UNION ALL SELECT 4, '$79 / 6GB' --> 6GB
UNION ALL SELECT 5, Null --> Null
UNION ALL SELECT 6, '$20 Plan' --> 0GB
UNION ALL SELECT 7, '460MB' --> 0.46GB
),
unified AS (
SELECT
id,
oldstr = str,
str = REPLACE(str, 'GB', '000MB')
FROM data
),
split AS (
SELECT
id,
ofs = 0,
endpos = CHARINDEX('MB', str),
length = ISNULL(CHARINDEX(' ', REVERSE(SUBSTRING(str, 1, NULLIF(CHARINDEX('MB', str), 0) - 1)) + ' ') - 1, 0),
str = SUBSTRING(str, NULLIF(CHARINDEX('MB', str), 0) + 2, 999999)
FROM unified
UNION ALL
SELECT
id,
ofs = NULLIF(endpos, 0) + 1,
endpos = CHARINDEX('MB', str),
length = ISNULL(CHARINDEX(' ', REVERSE(SUBSTRING(str, 1, NULLIF(CHARINDEX('MB', str), 0) - 1)) + ' ') - 1, 0),
str = SUBSTRING(str, NULLIF(CHARINDEX('MB', str), 0) + 2, 999999)
FROM split
WHERE length > 0
),
extracted AS (
SELECT
d.id,
str = d.oldstr,
mb = CAST(SUBSTRING(d.str, s.ofs + s.endpos - s.length, s.length) AS int)
FROM unified d
INNER JOIN split s ON d.id = s.id
)
SELECT
id,
str,
gb = RTRIM(CAST(SUM(mb) AS float) / 1000) + 'GB'
FROM extracted
GROUP BY id, str
ORDER BY id
Basically, the idea is first to convert all gigabytes to megabytes, to then be able to search for and extract only megabyte amounts. The search & extract method involves a recursive CTE and consists essentially of these steps:
1) find the position of the first MB;
2) find the length of the number immediately before the MB;
3) cut off the beginning of the string right at the end of the first MB;
4) repeat from Step 1 until no MB is found;
5) join the found figures to the original string list to extract the amounts themselves.
Afterwards, it only remains for us to group by key values and sum the obtained amounts. Here's the output:
id str gb
-- -------------------------------------------- ------
1 $15 / 1GB 24m + Intern 120MB 1.12GB
2 $19.95 / 500MB + $49.95 / 9GB Blackberry 9.5GB
3 $174.95 Blackberry 24GB + $10 / 1GB Datapack 25GB
4 $79 / 6GB 6GB
5 NULL NULL
6 $20 Plan 0GB
7 460MB 0.46GB
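For reference, the unify-then-extract idea (GB to MB, pull out every number glued to MB, sum) is compact enough to express in a few lines of Python; this sketch makes the same assumptions as the SQL (integer amounts, GB/MB never part of anything but a traffic size):

```python
import re

def total_gb(s):
    """Sum every traffic amount in a plan string, returned in gigabytes."""
    if s is None:
        return None
    s = s.replace('GB', '000MB')                        # unify units: 9GB -> 9000MB
    mb = sum(int(n) for n in re.findall(r'(\d+)MB', s))
    return mb / 1000.0
```

For instance, `total_gb('$15 / 1GB 24m + Intern 120MB')` returns 1.12, matching the first sample row above.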
qid & accept id:
(9777457, 9777799)
query:
DB schema, many-many or bool values in table
soup:
Another suggestion is that you use a linker table. This would be more maintainable and easily documented. A linker table is used when you have a many-to-many relationship. (A restaurant can have many types of menu, and a particular type of menu can be utilized by many restaurants.)
\nThis lets you add additional menu types as a row in a "menu_types" table later, without changing the structure of any table.
\nIt does make your queries somewhat more complicated, though, as you have to perform some joins.
\nFirst, you would have three tables something like this:
\nrestaurants\n---------------\nid name\n1 Moe's\n2 Steak & Shrimp House\n3 McDonald's\n\nrestaurant_menus\n----------------\nrestaurant_id menu_type\n1 1\n1 3\n2 4\n3 1\n3 3\n3 4\n\nmenu_types\n---------------\nid type\n1 Breakfast\n2 Brunch\n3 Lunch\n4 Dinner\n
\nSo, to see what kind of menus each restaurant offers, your query goes like this:
\nSELECT r.name, mt.type\nFROM restaurants r\n JOIN restaurant_menus rm\n ON (r.id = rm.restaurant_id)\n JOIN menu_types mt\n ON (rm.menu_type = mt.id)\nORDER BY r.name ASC;\n
\nThis would produce:
\nname type \n-------------------- -----------\nMcDonald's Lunch \nMcDonald's Breakfast \nMcDonald's Dinner \nMoe's Breakfast \nMoe's Lunch \nSteak & Shrimp House Dinner \n
\n
soup wrap:
Another suggestion is that you use a linker table. This would be more maintainable and easily documented. A linker table is used when you have a many-to-many relationship. (A restaurant can have many types of menu, and a particular type of menu can be utilized by many restaurants.)
This lets you add additional menu types as a row in a "menu_types" table later, without changing the structure of any table.
It does make your queries somewhat more complicated, though, as you have to perform some joins.
First, you would have three tables something like this:
restaurants
---------------
id name
1 Moe's
2 Steak & Shrimp House
3 McDonald's
restaurant_menus
----------------
restaurant_id menu_type
1 1
1 3
2 4
3 1
3 3
3 4
menu_types
---------------
id type
1 Breakfast
2 Brunch
3 Lunch
4 Dinner
So, to see what kind of menus each restaurant offers, your query goes like this:
SELECT r.name, mt.type
FROM restaurants r
JOIN restaurant_menus rm
ON (r.id = rm.restaurant_id)
JOIN menu_types mt
ON (rm.menu_type = mt.id)
ORDER BY r.name ASC;
This would produce:
name type
-------------------- -----------
McDonald's Lunch
McDonald's Breakfast
McDonald's Dinner
Moe's Breakfast
Moe's Lunch
Steak & Shrimp House Dinner
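The same three-table layout and join can be tried out in any engine; here it is run through Python's sqlite3, purely as a self-contained demo of the linker-table query:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE restaurants (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE menu_types  (id INTEGER PRIMARY KEY, type TEXT);
CREATE TABLE restaurant_menus (restaurant_id INTEGER, menu_type INTEGER);
INSERT INTO restaurants VALUES (1, 'Moe''s'), (2, 'Steak & Shrimp House'), (3, 'McDonald''s');
INSERT INTO menu_types  VALUES (1, 'Breakfast'), (2, 'Brunch'), (3, 'Lunch'), (4, 'Dinner');
INSERT INTO restaurant_menus VALUES (1,1), (1,3), (2,4), (3,1), (3,3), (3,4);
""")

# One output row per linker row: each restaurant paired with each of its menus
rows = conn.execute("""
SELECT r.name, mt.type
FROM restaurants r
JOIN restaurant_menus rm ON r.id = rm.restaurant_id
JOIN menu_types mt       ON rm.menu_type = mt.id
ORDER BY r.name, mt.type
""").fetchall()
```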
qid & accept id:
(9789395, 9789631)
query:
Ignore emails that match a regexp in Postgres
soup:
Escape (double escape) the plus sign:
\nE'^(info)\\+[A-Za-z0-9._%-]+@[A-Za-z0-9.-]+[.][A-Za-z]+'\n here __^^\n
\nMoreover, there're no need to make a group with (info)
\nE'^info\\+[A-Za-z0-9._%-]+@[A-Za-z0-9.-]+[.][A-Za-z]+'\n
\n
soup wrap:
Escape (double escape) the plus sign:
E'^(info)\\+[A-Za-z0-9._%-]+@[A-Za-z0-9.-]+[.][A-Za-z]+'
here __^^
Moreover, there's no need to make a group with (info):
E'^info\\+[A-Za-z0-9._%-]+@[A-Za-z0-9.-]+[.][A-Za-z]+'
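The difference the escape makes is easy to see with any regex engine; in Python (where a raw string needs only a single backslash), the unescaped + quantifies the trailing o instead of matching a literal plus sign:

```python
import re

# Unescaped: '+' quantifies the preceding 'o', i.e. "inf" then one or more "o"
loose  = r'^info+[A-Za-z0-9._%-]+@[A-Za-z0-9.-]+[.][A-Za-z]+'
# Escaped: '\+' matches a literal plus sign after "info"
strict = r'^info\+[A-Za-z0-9._%-]+@[A-Za-z0-9.-]+[.][A-Za-z]+'

print(bool(re.match(strict, 'info+sales@example.com')))    # True
print(bool(re.match(strict, 'infooo.sales@example.com')))  # False: no literal '+'
print(bool(re.match(loose,  'infooo.sales@example.com')))  # True: 'o+' ate the o's
```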
qid & accept id:
(9807875, 9809089)
query:
SQL: Feeding SELECT output to LIKE
soup:
with prefix_list as (\n select regexp_substr( str1, '^[A-Z]*' ) prefix from t1 where str2 = 'NAME1'\n)\nselect t1.str1 from t1 join prefix_list\n on t1.str1 = prefix_list.prefix\n or regexp_like( t1.str1, prefix_list.prefix||'_[0-9]' )\n
\nTo do it without the regexp functions (for older Oracle versions), it depends a bit on how much you want to validate the format of the strings.
\nselect t1.str1\n from (\n select case when instr( str1, '_' ) > 0\n then substr( str1, 1, instr( str1, '_' ) - 1 )\n else str1\n end prefix\n from t1 where str2 = 'NAME1'\n) prefix_list,\n t1\nwhere t1.str1 = prefix\n or t2.str1 like prefix || '\__' escape '\'\n
\n
soup wrap:
with prefix_list as (
select regexp_substr( str1, '^[A-Z]*' ) prefix from t1 where str2 = 'NAME1'
)
select t1.str1 from t1 join prefix_list
on t1.str1 = prefix_list.prefix
or regexp_like( t1.str1, prefix_list.prefix||'_[0-9]' )
To do it without the regexp functions (for older Oracle versions), it depends a bit on how much you want to validate the format of the strings.
select t1.str1
from (
select case when instr( str1, '_' ) > 0
then substr( str1, 1, instr( str1, '_' ) - 1 )
else str1
end prefix
from t1 where str2 = 'NAME1'
) prefix_list,
t1
where t1.str1 = prefix
or t1.str1 like prefix || '\__' escape '\'
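The CASE/INSTR branch of the subquery is just "take everything before the first underscore, or the whole string if there is none"; as a quick cross-check, the same rule in Python:

```python
def prefix(s):
    """Everything before the first underscore, or the whole string if none."""
    i = s.find('_')
    return s[:i] if i >= 0 else s

print(prefix('ABC_1'))  # prints "ABC"
print(prefix('ABC'))    # prints "ABC" -- no underscore, string kept whole
```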
qid & accept id:
(9861297, 9861596)
query:
Fixing duplicate customers in SQL
soup:
Update the Order table:
\nUPDATE o\nSET o.person_id = cc.max_person_id\nFROM\n [Order] AS o\n JOIN\n Customer AS c\n ON c.person_id = o.person_id\n JOIN\n ( SELECT customer_id\n , MAX(person_id) AS max_person_id\n FROM Customer\n GROUP BY customer_id\n ) AS cc\n ON cc.customer_id = c.customer_id ;\n
\nThen, update the Customer table:
\nUPDATE c\nSET c.person_id = cc.max_person_id\nFROM\n Customer AS c\n JOIN\n ( SELECT customer_id\n , MAX(person_id) AS max_person_id\n FROM Customer\n GROUP BY customer_id\n ) AS cc\n ON cc.customer_id = c.customer_id ;\n
\n
\nAfter that, it would be good to have Customer(person_id) defined as PRIMARY KEY or with a UNIQUE constraint.
\nAnd a FOREIGN KEY constraint from Order(person_id) to Customer(person_id)
\n
soup wrap:
Update the Order table:
UPDATE o
SET o.person_id = cc.max_person_id
FROM
[Order] AS o
JOIN
Customer AS c
ON c.person_id = o.person_id
JOIN
( SELECT customer_id
, MAX(person_id) AS max_person_id
FROM Customer
GROUP BY customer_id
) AS cc
ON cc.customer_id = c.customer_id ;
Then, update the Customer table:
UPDATE c
SET c.person_id = cc.max_person_id
FROM
Customer AS c
JOIN
( SELECT customer_id
, MAX(person_id) AS max_person_id
FROM Customer
GROUP BY customer_id
) AS cc
ON cc.customer_id = c.customer_id ;
After that, it would be good to have Customer(person_id) defined as PRIMARY KEY or with a UNIQUE constraint.
And a FOREIGN KEY constraint from Order(person_id) to Customer(person_id)
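The two updates can be rehearsed on a toy dataset; this sqlite3 version uses correlated subqueries instead of SQL Server's UPDATE ... FROM syntax (the sample rows and ids are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE Customer (customer_id INTEGER, person_id INTEGER);
CREATE TABLE "Order"  (order_id INTEGER, person_id INTEGER);
-- customer 1 was entered twice, as person 10 and as person 11
INSERT INTO Customer VALUES (1, 10), (1, 11), (2, 20);
INSERT INTO "Order"  VALUES (100, 10), (101, 11), (102, 20);

-- Step 1: point every order at the surviving (max) person_id for its customer
UPDATE "Order" SET person_id = (
    SELECT MAX(c2.person_id)
    FROM Customer c JOIN Customer c2 ON c2.customer_id = c.customer_id
    WHERE c.person_id = "Order".person_id);

-- Step 2: collapse the Customer rows onto that same person_id
UPDATE Customer SET person_id = (
    SELECT MAX(c2.person_id) FROM Customer c2
    WHERE c2.customer_id = Customer.customer_id);
""")

order_people = [p for (p,) in conn.execute('SELECT person_id FROM "Order" ORDER BY order_id')]
print(order_people)  # [11, 11, 20] -- both of customer 1's orders now share person 11
```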
qid & accept id:
(9919278, 9921779)
query:
SQL multiple replace
soup:
Ed Northridge's answer will work, and I have upvoted it, but just in case multiple replacements are required I am adding another option using his sample data. If, for example one of the companies was called "The PC Company LTD" This would duplicate rows in the output with one being "The PC LTD" and the other "The PC Company". To resolve this there are 2 option depending on your desired outcome. The first is to only replace the "Bad Strings" when they occur at the end of the name.
\nSELECT c.ID, RTRIM(x.Name) [Name]\nFROM @companies c\n OUTER APPLY \n ( SELECT REPLACE(c.name, item, '') AS [Name]\n FROM @badStrings\n -- WHERE CLAUSE ADDED HERE\n WHERE CHARINDEX(item, c.Name) = 1 + LEN(c.Name) - LEN(Item)\n ) x\nWHERE c.name != '' \nAND x.[Name] != c.Name\n
\nThis would yield "The PC Company" with no duplicates.
\nThe other option is replace All occurances of the Bad Strings recursively:
\n;WITH CTE AS\n( SELECT c.ID, c.Name [OriginalName], RTRIM(x.Name) [Name], 1 [Level]\n FROM @companies c\n OUTER APPLY \n ( SELECT REPLACE(c.name, item, '') AS [Name]\n FROM @badStrings\n WHERE CHARINDEX(item, c.Name) = 1 + LEN(c.Name) - LEN(Item)\n ) x\n WHERE c.name != '' \n AND RTRIM(x.Name) != c.Name\n UNION ALL\n SELECT c.ID, OriginalName, RTRIM(x.Name) [Name], Level + 1 [Level]\n FROM CTE c\n OUTER APPLY \n ( SELECT REPLACE(c.name, item, '') AS [Name]\n FROM @badStrings\n WHERE CHARINDEX(item, c.Name) = 1 + LEN(c.Name) - LEN(Item)\n ) x\n WHERE c.name != '' \n AND x.[Name] != c.Name \n)\n\nSELECT DISTINCT ID, Name, OriginalName\nFROM ( SELECT *, MAX(Level) OVER(PARTITION BY ID) [MaxLevel]\n FROM CTE\n ) c\nWHERE Level = maxLevel\n
\nThis would yield "The PC" from "The PC Company".
\n
soup wrap:
Ed Northridge's answer will work, and I have upvoted it, but just in case multiple replacements are required I am adding another option using his sample data. If, for example, one of the companies was called "The PC Company LTD", this would duplicate rows in the output, with one being "The PC LTD" and the other "The PC Company". To resolve this there are two options depending on your desired outcome. The first is to only replace the "Bad Strings" when they occur at the end of the name.
SELECT c.ID, RTRIM(x.Name) [Name]
FROM @companies c
OUTER APPLY
( SELECT REPLACE(c.name, item, '') AS [Name]
FROM @badStrings
-- WHERE CLAUSE ADDED HERE
WHERE CHARINDEX(item, c.Name) = 1 + LEN(c.Name) - LEN(Item)
) x
WHERE c.name != ''
AND x.[Name] != c.Name
This would yield "The PC Company" with no duplicates.
The other option is to replace all occurrences of the bad strings recursively:
;WITH CTE AS
( SELECT c.ID, c.Name [OriginalName], RTRIM(x.Name) [Name], 1 [Level]
FROM @companies c
OUTER APPLY
( SELECT REPLACE(c.name, item, '') AS [Name]
FROM @badStrings
WHERE CHARINDEX(item, c.Name) = 1 + LEN(c.Name) - LEN(Item)
) x
WHERE c.name != ''
AND RTRIM(x.Name) != c.Name
UNION ALL
SELECT c.ID, OriginalName, RTRIM(x.Name) [Name], Level + 1 [Level]
FROM CTE c
OUTER APPLY
( SELECT REPLACE(c.name, item, '') AS [Name]
FROM @badStrings
WHERE CHARINDEX(item, c.Name) = 1 + LEN(c.Name) - LEN(Item)
) x
WHERE c.name != ''
AND x.[Name] != c.Name
)
SELECT DISTINCT ID, Name, OriginalName
FROM ( SELECT *, MAX(Level) OVER(PARTITION BY ID) [MaxLevel]
FROM CTE
) c
WHERE Level = maxLevel
This would yield "The PC" from "The PC Company".
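The recursive CTE's stopping condition (keep stripping a bad string off the end until none matches) behaves like this little Python loop, which may help when deciding which of the two variants you want:

```python
def strip_bad_suffixes(name, bad_strings):
    """Repeatedly remove any bad string that occurs at the end of the name."""
    stripped = True
    while stripped:
        stripped = False
        for item in bad_strings:
            if name.endswith(item):
                name = name[:-len(item)].rstrip()
                stripped = True
    return name

print(strip_bad_suffixes('The PC Company LTD', [' LTD', ' Company']))  # The PC
```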
qid & accept id:
(10011337, 10013471)
query:
How to design a database table to enforce non-duplicate Unique Key records
soup:
Okay, this isn't the prettiest of code, but it does enforce the constraint, I think. The trick is to create an indexed view with two unique indexes defined on it:
\ncreate table dbo.ABC (\n Col1 int not null,\n Col2 int not null\n)\ngo\ncreate view dbo.ABC_Col1_Col2_dep\nwith schemabinding\nas\n select Col1,Col2,COUNT_BIG(*) as Cnt\n from\n dbo.ABC\n group by\n Col1,Col2\ngo\ncreate unique clustered index IX_Col1_UniqueCol2 on dbo.ABC_Col1_Col2_dep (Col1)\ngo\ncreate unique nonclustered index IX_Col2_UniqueCol1 on dbo.ABC_Col1_Col2_dep (Col2)\ngo\n
\nNow we insert some initial data:
\ninsert into dbo.ABC (Col1,Col2)\nselect 1,3 union all\nselect 2,19 union all\nselect 3,12\n
\nWe can add another row with exactly the same values for Col1 and Col2:
\ninsert into dbo.ABC (Col1,Col2)\nselect 1,3\n
\nBut if we pick a value for Col2 that has been used for another Col1, or vice versa, we get errors:
\ninsert into dbo.ABC (Col1,Col2)\nselect 2,3\ngo\ninsert into dbo.ABC (Col1,Col2)\nselect 1,5\n
\n
\nThe trick here was to observe that this query:
\n select Col1,Col2,COUNT_BIG(*) as Cnt\n from\n dbo.ABC\n group by\n Col1,Col2\n
\nwill only have one row for a particular Col1 value, and only one row with a particular Col2 value, provided that the constraint you're seeking to enforce has not been broken - but as soon as a non-matching row is inserted into the base table, this query returns multiple rows.
\n
soup wrap:
Okay, this isn't the prettiest of code, but it does enforce the constraint, I think. The trick is to create an indexed view with two unique indexes defined on it:
create table dbo.ABC (
Col1 int not null,
Col2 int not null
)
go
create view dbo.ABC_Col1_Col2_dep
with schemabinding
as
select Col1,Col2,COUNT_BIG(*) as Cnt
from
dbo.ABC
group by
Col1,Col2
go
create unique clustered index IX_Col1_UniqueCol2 on dbo.ABC_Col1_Col2_dep (Col1)
go
create unique nonclustered index IX_Col2_UniqueCol1 on dbo.ABC_Col1_Col2_dep (Col2)
go
Now we insert some initial data:
insert into dbo.ABC (Col1,Col2)
select 1,3 union all
select 2,19 union all
select 3,12
We can add another row with exactly the same values for Col1 and Col2:
insert into dbo.ABC (Col1,Col2)
select 1,3
But if we pick a value for Col2 that has been used for another Col1, or vice versa, we get errors:
insert into dbo.ABC (Col1,Col2)
select 2,3
go
insert into dbo.ABC (Col1,Col2)
select 1,5
The trick here was to observe that this query:
select Col1,Col2,COUNT_BIG(*) as Cnt
from
dbo.ABC
group by
Col1,Col2
will only have one row for a particular Col1 value, and only one row with a particular Col2 value, provided that the constraint you're seeking to enforce has not been broken - but as soon as a non-matching row is inserted into the base table, this query returns multiple rows.
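Put differently, the two unique indexes hold only while the data is a consistent one-to-one pairing between Col1 and Col2 values (exact duplicate pairs are fine, since they collapse into one grouped row). The invariant can be stated in a few lines of Python:

```python
def breaks_pairing(pairs):
    """True if some Col1 value maps to two Col2 values, or vice versa."""
    by_col1, by_col2 = {}, {}
    for a, b in pairs:
        # setdefault records the first partner seen; a different partner later
        # is exactly what would make the grouped view gain a duplicate key
        if by_col1.setdefault(a, b) != b or by_col2.setdefault(b, a) != a:
            return True
    return False

print(breaks_pairing([(1, 3), (2, 19), (3, 12), (1, 3)]))  # False: repeat pair allowed
print(breaks_pairing([(1, 3), (2, 3)]))                    # True: Col2=3 reused
print(breaks_pairing([(1, 3), (1, 5)]))                    # True: Col1=1 reused
```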
qid & accept id:
(10019557, 10019607)
query:
Join SQL Server tables on a like statement
soup:
Cast StateID to a compatible type, e.g.
\nWHERE URL LIKE '%' + CONVERT(varchar(50), StateID) + '%'\n
\nor
\nWHERE URL LIKE N'%' + CONVERT(nvarchar(50), StateID) + N'%'\n
\nif URL is nvarchar(...)
\nEDIT
\nAs pointed out in another answer, this could result in poor performance on large tables.\nThe LIKE combined with a CONVERT will result in a table scan. This may not be a problem for small tables, but you should consider splitting the URL into two columns if performance becomes a problem. One column would contain 'page.aspx?id=' and the other the UNIQUEIDENTIFIER. Your query could then be optimized much more easily.
\n
soup wrap:
Cast StateID to a compatible type, e.g.
WHERE URL LIKE '%' + CONVERT(varchar(50), StateID) + '%'
or
WHERE URL LIKE N'%' + CONVERT(nvarchar(50), StateID) + N'%'
if URL is nvarchar(...)
EDIT
As pointed out in another answer, this could result in poor performance on large tables.
The LIKE combined with a CONVERT will result in a table scan. This may not be a problem for small tables, but you should consider splitting the URL into two columns if performance becomes a problem. One column would contain 'page.aspx?id=' and the other the UNIQUEIDENTIFIER. Your query could then be optimized much more easily.
qid & accept id:
(10025996, 10026264)
query:
Selecting different condition based on presence of association?
soup:
I'm assuming event contains the period association?
\nIn any case you want a left join between the discounts table and the periods table. This will give you the period data to do the begin = today where clause, and null if there is no period. Thus the SQL to select the data would be
\nSELECT [columns]\nFROM discounts_table\nLEFT JOIN periods_table ON periods_table.discount_id = discounts_table.id\nWHERE (periods_table.begin = [today]) OR (periods_table.begin IS NULL AND discounts_table.created_at BETWEEN [yesterday] AND [today])\n
\nin rails you should be able to achieve this as follows:
\nDiscount\n .joins("LEFT JOIN periods_table ON periods_table.discount_id = discounts_table.id")\n .where("(periods_table.begin = ?) OR (periods_table.begin IS NULL AND discounts_table.created_at BETWEEN ? AND ?)", today, today, 1.day.ago.to_date)\n
\nUnfortunately you need the use SQL statements rather than letting rails create it for you as:
\n\n- joins with a symbol only creates an INNER JOIN, not a LEFT JOIN
\n- where with symbols, hashes etc will combine conditions using AND, not OR
\n
\n
soup wrap:
I'm assuming event contains the period association?
In any case you want a left join between the discounts table and the periods table. This will give you the period data to do the begin = today where clause, and null if there is no period. Thus the SQL to select the data would be
SELECT [columns]
FROM discounts_table
LEFT JOIN periods_table ON periods_table.discount_id = discounts_table.id
WHERE (periods_table.begin = [today]) OR (periods_table.begin IS NULL AND discounts_table.created_at BETWEEN [yesterday] AND [today])
in rails you should be able to achieve this as follows:
Discount
.joins("LEFT JOIN periods_table ON periods_table.discount_id = discounts_table.id")
.where("(periods_table.begin = ?) OR (periods_table.begin IS NULL AND discounts_table.created_at BETWEEN ? AND ?)", today, today, 1.day.ago.to_date)
Unfortunately you need to use SQL strings rather than letting Rails create them for you, as:
- joins with a symbol only creates an INNER JOIN, not a LEFT JOIN
- where with symbols, hashes etc will combine conditions using AND, not OR
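To make the LEFT JOIN + OR logic concrete, here it is against sqlite3 with toy data ("begin" has to be quoted there since it is a keyword; the table names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE discounts (id INTEGER, created_at TEXT);
CREATE TABLE periods   (discount_id INTEGER, "begin" TEXT);
INSERT INTO discounts VALUES (1, '2012-04-01'), (2, '2012-04-05'), (3, '2012-03-01');
INSERT INTO periods   VALUES (1, '2012-04-05');   -- only discount 1 has a period
""")

today, yesterday = '2012-04-05', '2012-04-04'
rows = conn.execute("""
SELECT d.id
FROM discounts d
LEFT JOIN periods p ON p.discount_id = d.id
WHERE p."begin" = ?
   OR (p."begin" IS NULL AND d.created_at BETWEEN ? AND ?)
ORDER BY d.id
""", (today, yesterday, today)).fetchall()
print(rows)  # [(1,), (2,)]: period starts today, or no period but created recently
```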
qid & accept id:
(10035769, 10035842)
query:
Query to Select Between Two Times of Day
soup:
Since you're on SQL Server 2008, you can use the new TIME datatype:
\nSELECT * FROM MyTable\nWHERE CAST(SyncDate AS TIME) BETWEEN '14:00' and '14:30'\n
\nIf your backend isn't 2008 yet :-) then you'd need something like:
\nSELECT * FROM MyTable\nWHERE DATEPART(HOUR, SyncDate) = 14 AND DATEPART(MINUTE, SyncDate) BETWEEN 0 AND 30\n
\nto check for 14:00-14:30 hours.
\n
soup wrap:
Since you're on SQL Server 2008, you can use the new TIME datatype:
SELECT * FROM MyTable
WHERE CAST(SyncDate AS TIME) BETWEEN '14:00' and '14:30'
If your backend isn't 2008 yet :-) then you'd need something like:
SELECT * FROM MyTable
WHERE DATEPART(HOUR, SyncDate) = 14 AND DATEPART(MINUTE, SyncDate) BETWEEN 0 AND 30
to check for 14:00-14:30 hours.
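The pre-2008 workaround is really just "compare only the time-of-day part"; the equivalent filter, sketched in Python for clarity:

```python
from datetime import datetime, time

stamps = [datetime(2012, 4, 11, 14, 15),
          datetime(2012, 4, 11, 9, 0),
          datetime(2012, 4, 12, 14, 30)]

# Same idea as CAST(SyncDate AS TIME) BETWEEN '14:00' AND '14:30':
# the date component is discarded, only the time of day is compared
hits = [d for d in stamps if time(14, 0) <= d.time() <= time(14, 30)]
print(len(hits))  # 2 -- the 14:15 and 14:30 rows, on whichever date
```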
qid & accept id:
(10109770, 10110046)
query:
get all child from an parent id
soup:
This should do it for you:
\ncreate table #temp \n(\n id int, \n parentid int,\n data varchar(1)\n)\ninsert #temp (id, parentid, data) values (1, -1, 'a')\ninsert #temp (id, parentid, data) values (2,1, 'b')\ninsert #temp (id, parentid, data) values (3,2, 'c')\ninsert #temp (id, parentid, data) values (4,3, 'd')\ninsert #temp (id, parentid, data) values (5,3, 'f')\n\n; with cte as (\n select id, parentid, data, id as topparent\n from #temp\n union all\n select child.id, child.parentid, child.data, parent.topparent\n from #temp child\n join cte parent\n on parent.id = child.parentid\n\n)\nselect id, parentid, data\nfrom cte\nwhere topparent = 2\n\ndrop table #temp\n
\nEDIT or you can put the WHERE clause inside the first select
\ncreate table #temp \n(\n id int, \n parentid int,\n data varchar(1)\n)\ninsert #temp (id, parentid, data) values (1, -1, 'a')\ninsert #temp (id, parentid, data) values (2,1, 'b')\ninsert #temp (id, parentid, data) values (3,2, 'c')\ninsert #temp (id, parentid, data) values (4,3, 'd')\ninsert #temp (id, parentid, data) values (5,3, 'f')\n\n; with cte as (\n select id, parentid, data, id as topparent\n from #temp\n WHERE id = 2\n union all\n select child.id, child.parentid, child.data, parent.topparent\n from #temp child\n join cte parent\n on parent.id = child.parentid\n\n)\nselect id, parentid, data\nfrom cte\n\ndrop table #temp\n
\nResults:
\nid parentid data\n2 1 b\n3 2 c\n4 3 d\n5 3 f\n
\n
soup wrap:
This should do it for you:
create table #temp
(
id int,
parentid int,
data varchar(1)
)
insert #temp (id, parentid, data) values (1, -1, 'a')
insert #temp (id, parentid, data) values (2,1, 'b')
insert #temp (id, parentid, data) values (3,2, 'c')
insert #temp (id, parentid, data) values (4,3, 'd')
insert #temp (id, parentid, data) values (5,3, 'f')
; with cte as (
select id, parentid, data, id as topparent
from #temp
union all
select child.id, child.parentid, child.data, parent.topparent
from #temp child
join cte parent
on parent.id = child.parentid
)
select id, parentid, data
from cte
where topparent = 2
drop table #temp
EDIT or you can put the WHERE clause inside the first select
create table #temp
(
id int,
parentid int,
data varchar(1)
)
insert #temp (id, parentid, data) values (1, -1, 'a')
insert #temp (id, parentid, data) values (2,1, 'b')
insert #temp (id, parentid, data) values (3,2, 'c')
insert #temp (id, parentid, data) values (4,3, 'd')
insert #temp (id, parentid, data) values (5,3, 'f')
; with cte as (
select id, parentid, data, id as topparent
from #temp
WHERE id = 2
union all
select child.id, child.parentid, child.data, parent.topparent
from #temp child
join cte parent
on parent.id = child.parentid
)
select id, parentid, data
from cte
drop table #temp
Results:
id parentid data
2 1 b
3 2 c
4 3 d
5 3 f
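The WHERE id = 2 form (filtering in the anchor member) ports directly to any engine with recursive CTEs; here it is verified through Python's sqlite3 on the same sample rows:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE t (id INTEGER, parentid INTEGER, data TEXT);
INSERT INTO t VALUES (1, -1, 'a'), (2, 1, 'b'), (3, 2, 'c'), (4, 3, 'd'), (5, 3, 'f');
""")

rows = conn.execute("""
WITH RECURSIVE cte AS (
    SELECT id, parentid, data FROM t WHERE id = 2   -- anchor: the starting parent
    UNION ALL
    SELECT c.id, c.parentid, c.data
    FROM t c JOIN cte p ON p.id = c.parentid        -- walk down to the children
)
SELECT id, parentid, data FROM cte ORDER BY id
""").fetchall()
print(rows)  # [(2, 1, 'b'), (3, 2, 'c'), (4, 3, 'd'), (5, 3, 'f')]
```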
qid & accept id:
(10121680, 10121774)
query:
SQL query over multiple rows
soup:
Another approach would be -
\nSELECT housing_id\nFROM mytable\nWHERE facility_id IN (4,7)\nGROUP BY housing_id\nHAVING COUNT(DISTINCT facility_id) = 2\n
\nUPDATE - inspired by the comment by Josvic, I decided to do some more testing and thought I would include my findings.
\nOne of the benefits of using this query is that it is easy to modify to include more facility_ids. If you want to find all housing_ids that have facility_ids 1, 3, 4 & 7 you just do -
\nSELECT housing_id\nFROM mytable\nWHERE facility_id IN (1,3,4,7)\nGROUP BY housing_id\nHAVING COUNT(DISTINCT facility_id) = 4\n
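The GROUP BY ... HAVING pattern generalizes cleanly; a self-contained check through Python's sqlite3 (toy data, not the 500k-row benchmark table used below):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE mytable (housing_id INTEGER, facility_id INTEGER);
INSERT INTO mytable VALUES (1,4), (1,7), (1,9), (2,4), (3,7), (4,4), (4,7);
""")

rows = conn.execute("""
SELECT housing_id
FROM mytable
WHERE facility_id IN (4, 7)
GROUP BY housing_id
HAVING COUNT(DISTINCT facility_id) = 2   -- must equal the number of ids listed
ORDER BY housing_id
""").fetchall()
print(rows)  # [(1,), (4,)] -- only housings having both facility 4 and 7
```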
\nThe performance of all three of these queries varies hugely based on the indexing strategy employed. I was unable to get reasonable performance, on my test dataset, from the dependent subquery version regardless of the indexing used.
\nThe self join solution provided by Tim performs very well given separate single column indices on the two columns but does not perform quite so well as the number of criteria increases.
\nHere are some basic stats on my test table - 500k rows - 147963 housing_ids with potential values for facility_id between 1 and 9.
\nHere are the indices used for running all these tests -
\nSHOW INDEXES FROM mytable;\n+---------+------------+---------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+\n| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type |\n+---------+------------+---------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+\n| mytable | 0 | UQ_housing_facility | 1 | housing_id | A | 500537 | NULL | NULL | | BTREE |\n| mytable | 0 | UQ_housing_facility | 2 | facility_id | A | 500537 | NULL | NULL | | BTREE |\n| mytable | 0 | UQ_facility_housing | 1 | facility_id | A | 12 | NULL | NULL | | BTREE |\n| mytable | 0 | UQ_facility_housing | 2 | housing_id | A | 500537 | NULL | NULL | | BTREE |\n| mytable | 1 | IX_housing | 1 | housing_id | A | 500537 | NULL | NULL | | BTREE |\n| mytable | 1 | IX_facility | 1 | facility_id | A | 12 | NULL | NULL | | BTREE |\n+---------+------------+---------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+\n
\nThe first query tested is the dependent subquery -
\nSELECT SQL_NO_CACHE DISTINCT housing_id\nFROM mytable\nWHERE housing_id IN (SELECT housing_id FROM mytable WHERE facility_id=4)\nAND housing_id IN (SELECT housing_id FROM mytable WHERE facility_id=7);\n\n17321 rows in set (9.15 sec)\n\n+----+--------------------+---------+-----------------+----------------------------------------------------------------+---------------------+---------+------------+--------+---------------------------------------+\n| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |\n+----+--------------------+---------+-----------------+----------------------------------------------------------------+---------------------+---------+------------+--------+---------------------------------------+\n| 1 | PRIMARY | mytable | range | NULL | IX_housing | 4 | NULL | 500538 | Using where; Using index for group-by |\n| 3 | DEPENDENT SUBQUERY | mytable | unique_subquery | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | UQ_housing_facility | 8 | func,const | 1 | Using index; Using where |\n| 2 | DEPENDENT SUBQUERY | mytable | unique_subquery | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | UQ_housing_facility | 8 | func,const | 1 | Using index; Using where |\n+----+--------------------+---------+-----------------+----------------------------------------------------------------+---------------------+---------+------------+--------+---------------------------------------+\n\nSELECT SQL_NO_CACHE DISTINCT housing_id\nFROM mytable\nWHERE housing_id IN (SELECT housing_id FROM mytable WHERE facility_id=1)\nAND housing_id IN (SELECT housing_id FROM mytable WHERE facility_id=3)\nAND housing_id IN (SELECT housing_id FROM mytable WHERE facility_id=4)\nAND housing_id IN (SELECT housing_id FROM mytable WHERE facility_id=7);\n\n567 rows in set (9.30 
sec)\n\n+----+--------------------+---------+-----------------+----------------------------------------------------------------+---------------------+---------+------------+--------+---------------------------------------+\n| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |\n+----+--------------------+---------+-----------------+----------------------------------------------------------------+---------------------+---------+------------+--------+---------------------------------------+\n| 1 | PRIMARY | mytable | range | NULL | IX_housing | 4 | NULL | 500538 | Using where; Using index for group-by |\n| 5 | DEPENDENT SUBQUERY | mytable | unique_subquery | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | UQ_housing_facility | 8 | func,const | 1 | Using index; Using where |\n| 4 | DEPENDENT SUBQUERY | mytable | unique_subquery | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | UQ_housing_facility | 8 | func,const | 1 | Using index; Using where |\n| 3 | DEPENDENT SUBQUERY | mytable | unique_subquery | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | UQ_housing_facility | 8 | func,const | 1 | Using index; Using where |\n| 2 | DEPENDENT SUBQUERY | mytable | unique_subquery | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | UQ_housing_facility | 8 | func,const | 1 | Using index; Using where |\n+----+--------------------+---------+-----------------+----------------------------------------------------------------+---------------------+---------+------------+--------+---------------------------------------+\n
\nNext is my version using the GROUP BY ... HAVING COUNT ...
\nSELECT SQL_NO_CACHE housing_id\nFROM mytable\nWHERE facility_id IN (4,7)\nGROUP BY housing_id\nHAVING COUNT(DISTINCT facility_id) = 2;\n\n17321 rows in set (0.79 sec)\n\n+----+-------------+---------+-------+---------------------------------+-------------+---------+------+--------+------------------------------------------+\n| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |\n+----+-------------+---------+-------+---------------------------------+-------------+---------+------+--------+------------------------------------------+\n| 1 | SIMPLE | mytable | range | UQ_facility_housing,IX_facility | IX_facility | 4 | NULL | 198646 | Using where; Using index; Using filesort |\n+----+-------------+---------+-------+---------------------------------+-------------+---------+------+--------+------------------------------------------+\n\nSELECT SQL_NO_CACHE housing_id\nFROM mytable\nWHERE facility_id IN (1,3,4,7)\nGROUP BY housing_id\nHAVING COUNT(DISTINCT facility_id) = 4;\n\n567 rows in set (1.25 sec)\n\n+----+-------------+---------+-------+---------------------------------+-------------+---------+------+--------+------------------------------------------+\n| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |\n+----+-------------+---------+-------+---------------------------------+-------------+---------+------+--------+------------------------------------------+\n| 1 | SIMPLE | mytable | range | UQ_facility_housing,IX_facility | IX_facility | 4 | NULL | 407160 | Using where; Using index; Using filesort |\n+----+-------------+---------+-------+---------------------------------+-------------+---------+------+--------+------------------------------------------+\n
\nAnd last but not least the self join -
\nSELECT SQL_NO_CACHE a.housing_id\nFROM mytable a\nINNER JOIN mytable b\n ON a.housing_id = b.housing_id\nWHERE a.facility_id = 4 AND b.facility_id = 7;\n\n17321 rows in set (1.37 sec)\n\n+----+-------------+-------+--------+----------------------------------------------------------------+---------------------+---------+-------------------------+-------+-------------+\n| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |\n+----+-------------+-------+--------+----------------------------------------------------------------+---------------------+---------+-------------------------+-------+-------------+\n| 1 | SIMPLE | b | ref | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | IX_facility | 4 | const | 94598 | Using index |\n| 1 | SIMPLE | a | eq_ref | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | UQ_housing_facility | 8 | test.b.housing_id,const | 1 | Using index |\n+----+-------------+-------+--------+----------------------------------------------------------------+---------------------+---------+-------------------------+-------+-------------+\n\nSELECT SQL_NO_CACHE a.housing_id\nFROM mytable a\nINNER JOIN mytable b\n ON a.housing_id = b.housing_id\nINNER JOIN mytable c\n ON a.housing_id = c.housing_id\nINNER JOIN mytable d\n ON a.housing_id = d.housing_id\nWHERE a.facility_id = 1\nAND b.facility_id = 3\nAND c.facility_id = 4\nAND d.facility_id = 7;\n\n567 rows in set (1.64 sec)\n\n+----+-------------+-------+--------+----------------------------------------------------------------+---------------------+---------+-------------------------+-------+--------------------------+\n| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |\n+----+-------------+-------+--------+----------------------------------------------------------------+---------------------+---------+-------------------------+-------+--------------------------+\n| 1 | SIMPLE | b | ref | 
UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | IX_facility | 4 | const | 93782 | Using index |\n| 1 | SIMPLE | d | eq_ref | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | UQ_housing_facility | 8 | test.b.housing_id,const | 1 | Using index |\n| 1 | SIMPLE | c | eq_ref | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | UQ_housing_facility | 8 | test.b.housing_id,const | 1 | Using index |\n| 1 | SIMPLE | a | eq_ref | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | UQ_housing_facility | 8 | test.d.housing_id,const | 1 | Using where; Using index |\n+----+-------------+-------+--------+----------------------------------------------------------------+---------------------+---------+-------------------------+-------+--------------------------+\n
\n
soup wrap:
Another approach would be -
SELECT housing_id
FROM mytable
WHERE facility_id IN (4,7)
GROUP BY housing_id
HAVING COUNT(DISTINCT facility_id) = 2
UPDATE - inspired by the comment by Josvic, I decided to do some more testing and thought I would include my findings.
One of the benefits of using this query is that it is easy to modify to include more facility_ids. If you want to find all housing_ids that have facility_ids 1, 3, 4 & 7 you just do -
SELECT housing_id
FROM mytable
WHERE facility_id IN (1,3,4,7)
GROUP BY housing_id
HAVING COUNT(DISTINCT facility_id) = 4
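This GROUP BY ... HAVING COUNT(DISTINCT ...) pattern (sometimes called relational division) is easy to sanity-check on a toy dataset. Here is a minimal sketch using Python's built-in sqlite3; the rows are made up for illustration, not the benchmark dataset below:

```python
import sqlite3

# Toy stand-in for mytable: (housing_id, facility_id) pairs.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mytable (housing_id INT, facility_id INT,
                      UNIQUE (housing_id, facility_id));
INSERT INTO mytable VALUES
  (10, 4), (10, 7),          -- housing 10 has both facilities
  (20, 4),                   -- housing 20 has only facility 4
  (30, 7), (30, 4), (30, 1); -- housing 30 has both (plus an extra)
""")

# Relational division: keep only housing_ids matching ALL listed facilities.
rows = conn.execute("""
    SELECT housing_id
    FROM mytable
    WHERE facility_id IN (4, 7)
    GROUP BY housing_id
    HAVING COUNT(DISTINCT facility_id) = 2
    ORDER BY housing_id
""").fetchall()

print([r[0] for r in rows])  # housings 10 and 30 have both facilities
```

The only thing to keep in sync is that the HAVING count must equal the number of ids in the IN list.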
The performance of all three of these queries varies hugely based on the indexing strategy employed. I was unable to get reasonable performance, on my test dataset, from the dependent subquery version regardless of the indexing used.
The self join solution provided by Tim performs very well given separate single column indices on the two columns but does not perform quite so well as the number of criteria increases.
Here are some basic stats on my test table - 500k rows - 147963 housing_ids with potential values for facility_id between 1 and 9.
Here are the indices used for running all these tests -
SHOW INDEXES FROM mytable;
+---------+------------+---------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type |
+---------+------------+---------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+
| mytable | 0 | UQ_housing_facility | 1 | housing_id | A | 500537 | NULL | NULL | | BTREE |
| mytable | 0 | UQ_housing_facility | 2 | facility_id | A | 500537 | NULL | NULL | | BTREE |
| mytable | 0 | UQ_facility_housing | 1 | facility_id | A | 12 | NULL | NULL | | BTREE |
| mytable | 0 | UQ_facility_housing | 2 | housing_id | A | 500537 | NULL | NULL | | BTREE |
| mytable | 1 | IX_housing | 1 | housing_id | A | 500537 | NULL | NULL | | BTREE |
| mytable | 1 | IX_facility | 1 | facility_id | A | 12 | NULL | NULL | | BTREE |
+---------+------------+---------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+
The first query tested is the dependent subquery -
SELECT SQL_NO_CACHE DISTINCT housing_id
FROM mytable
WHERE housing_id IN (SELECT housing_id FROM mytable WHERE facility_id=4)
AND housing_id IN (SELECT housing_id FROM mytable WHERE facility_id=7);
17321 rows in set (9.15 sec)
+----+--------------------+---------+-----------------+----------------------------------------------------------------+---------------------+---------+------------+--------+---------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------------+---------+-----------------+----------------------------------------------------------------+---------------------+---------+------------+--------+---------------------------------------+
| 1 | PRIMARY | mytable | range | NULL | IX_housing | 4 | NULL | 500538 | Using where; Using index for group-by |
| 3 | DEPENDENT SUBQUERY | mytable | unique_subquery | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | UQ_housing_facility | 8 | func,const | 1 | Using index; Using where |
| 2 | DEPENDENT SUBQUERY | mytable | unique_subquery | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | UQ_housing_facility | 8 | func,const | 1 | Using index; Using where |
+----+--------------------+---------+-----------------+----------------------------------------------------------------+---------------------+---------+------------+--------+---------------------------------------+
SELECT SQL_NO_CACHE DISTINCT housing_id
FROM mytable
WHERE housing_id IN (SELECT housing_id FROM mytable WHERE facility_id=1)
AND housing_id IN (SELECT housing_id FROM mytable WHERE facility_id=3)
AND housing_id IN (SELECT housing_id FROM mytable WHERE facility_id=4)
AND housing_id IN (SELECT housing_id FROM mytable WHERE facility_id=7);
567 rows in set (9.30 sec)
+----+--------------------+---------+-----------------+----------------------------------------------------------------+---------------------+---------+------------+--------+---------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------------+---------+-----------------+----------------------------------------------------------------+---------------------+---------+------------+--------+---------------------------------------+
| 1 | PRIMARY | mytable | range | NULL | IX_housing | 4 | NULL | 500538 | Using where; Using index for group-by |
| 5 | DEPENDENT SUBQUERY | mytable | unique_subquery | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | UQ_housing_facility | 8 | func,const | 1 | Using index; Using where |
| 4 | DEPENDENT SUBQUERY | mytable | unique_subquery | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | UQ_housing_facility | 8 | func,const | 1 | Using index; Using where |
| 3 | DEPENDENT SUBQUERY | mytable | unique_subquery | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | UQ_housing_facility | 8 | func,const | 1 | Using index; Using where |
| 2 | DEPENDENT SUBQUERY | mytable | unique_subquery | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | UQ_housing_facility | 8 | func,const | 1 | Using index; Using where |
+----+--------------------+---------+-----------------+----------------------------------------------------------------+---------------------+---------+------------+--------+---------------------------------------+
Next is my version using the GROUP BY ... HAVING COUNT ...
SELECT SQL_NO_CACHE housing_id
FROM mytable
WHERE facility_id IN (4,7)
GROUP BY housing_id
HAVING COUNT(DISTINCT facility_id) = 2;
17321 rows in set (0.79 sec)
+----+-------------+---------+-------+---------------------------------+-------------+---------+------+--------+------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+---------+-------+---------------------------------+-------------+---------+------+--------+------------------------------------------+
| 1 | SIMPLE | mytable | range | UQ_facility_housing,IX_facility | IX_facility | 4 | NULL | 198646 | Using where; Using index; Using filesort |
+----+-------------+---------+-------+---------------------------------+-------------+---------+------+--------+------------------------------------------+
SELECT SQL_NO_CACHE housing_id
FROM mytable
WHERE facility_id IN (1,3,4,7)
GROUP BY housing_id
HAVING COUNT(DISTINCT facility_id) = 4;
567 rows in set (1.25 sec)
+----+-------------+---------+-------+---------------------------------+-------------+---------+------+--------+------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+---------+-------+---------------------------------+-------------+---------+------+--------+------------------------------------------+
| 1 | SIMPLE | mytable | range | UQ_facility_housing,IX_facility | IX_facility | 4 | NULL | 407160 | Using where; Using index; Using filesort |
+----+-------------+---------+-------+---------------------------------+-------------+---------+------+--------+------------------------------------------+
And last but not least the self join -
SELECT SQL_NO_CACHE a.housing_id
FROM mytable a
INNER JOIN mytable b
ON a.housing_id = b.housing_id
WHERE a.facility_id = 4 AND b.facility_id = 7;
17321 rows in set (1.37 sec)
+----+-------------+-------+--------+----------------------------------------------------------------+---------------------+---------+-------------------------+-------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+--------+----------------------------------------------------------------+---------------------+---------+-------------------------+-------+-------------+
| 1 | SIMPLE | b | ref | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | IX_facility | 4 | const | 94598 | Using index |
| 1 | SIMPLE | a | eq_ref | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | UQ_housing_facility | 8 | test.b.housing_id,const | 1 | Using index |
+----+-------------+-------+--------+----------------------------------------------------------------+---------------------+---------+-------------------------+-------+-------------+
SELECT SQL_NO_CACHE a.housing_id
FROM mytable a
INNER JOIN mytable b
ON a.housing_id = b.housing_id
INNER JOIN mytable c
ON a.housing_id = c.housing_id
INNER JOIN mytable d
ON a.housing_id = d.housing_id
WHERE a.facility_id = 1
AND b.facility_id = 3
AND c.facility_id = 4
AND d.facility_id = 7;
567 rows in set (1.64 sec)
+----+-------------+-------+--------+----------------------------------------------------------------+---------------------+---------+-------------------------+-------+--------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+--------+----------------------------------------------------------------+---------------------+---------+-------------------------+-------+--------------------------+
| 1 | SIMPLE | b | ref | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | IX_facility | 4 | const | 93782 | Using index |
| 1 | SIMPLE | d | eq_ref | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | UQ_housing_facility | 8 | test.b.housing_id,const | 1 | Using index |
| 1 | SIMPLE | c | eq_ref | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | UQ_housing_facility | 8 | test.b.housing_id,const | 1 | Using index |
| 1 | SIMPLE | a | eq_ref | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | UQ_housing_facility | 8 | test.d.housing_id,const | 1 | Using where; Using index |
+----+-------------+-------+--------+----------------------------------------------------------------+---------------------+---------+-------------------------+-------+--------------------------+
qid & accept id:
(10139181, 10139312)
query:
Extract maximum value with object name in Access using SQL - handle identical values
soup:
SELECT a.custID, MAX(a.product), MAX(a.price)\nFROM orders AS a \nWHERE a.price = (select MAX(b.price) from orders b where a.custID=b.custID)\nGROUP by a.custID\n
\nJust a side note:
\nIf you have a more advanced SQL server that supports windowing functions, like SQL Server 2008 you can instead write
\nSELECT custID, product, price FROM (\n SELECT custID, product, price, ROW_NUMBER()\n OVER (partition by custid order by price desc) AS rowNo\n FROM orders \n) AS a\nWHERE a.rowNo = 1\n
\n
soup wrap:
SELECT a.custID, MAX(a.product), MAX(a.price)
FROM orders AS a
WHERE a.price = (select MAX(b.price) from orders b where a.custID=b.custID)
GROUP by a.custID
Just a side note:
If you have a more advanced SQL server that supports windowing functions, like SQL Server 2008, you can instead write
SELECT custID, product, price FROM (
SELECT custID, product, price, ROW_NUMBER()
OVER (partition by custid order by price desc) AS rowNo
FROM orders
) AS a
WHERE a.rowNo = 1
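The first form is the portable one (Access has no window functions), and the MAX(a.product) is what breaks ties when two products share the maximum price. A minimal sanity check in SQLite via Python, with made-up order rows including a deliberate price tie:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (custID INT, product TEXT, price REAL);
INSERT INTO orders VALUES
  (1, 'widget', 5.0),
  (1, 'gadget', 9.0),
  (2, 'apple',  3.0),
  (2, 'banana', 3.0);  -- tie: two products share customer 2's max price
""")

# Correlated-subquery form: restrict to each customer's max-price rows,
# then MAX(product) picks one deterministic row when prices tie.
rows = conn.execute("""
    SELECT a.custID, MAX(a.product), MAX(a.price)
    FROM orders AS a
    WHERE a.price = (SELECT MAX(b.price) FROM orders b
                     WHERE a.custID = b.custID)
    GROUP BY a.custID
    ORDER BY a.custID
""").fetchall()

print(rows)  # customer 2's tie resolved to 'banana' (alphabetical max)
```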
qid & accept id:
(10164354, 10164387)
query:
Designing a table with a column need to stored in four different languages
soup:
I recommend against using either of these methods you described. Instead, create a single highlight table with 3 columns:
\nCREATE TABLE highlight \n(\n article_id INT NOT NULL,\n language VARCHAR(),\n highlight_text VARCHAR() CHARACTER SET utf8,\n PRIMARY KEY (article_id, language),\n FOREIGN KEY (article_id) REFERENCES articles (article_id)\n)\n
\nEach row links to an article by article_id, and contains a language version and the relevant text. This allows you to add as many languages as you ever need to, and it doesn't matter if one is missing for an article - it simply doesn't appear in the table. It also allows you to use entirely different language sets per article if it ever becomes necessary.
\nValues then look like:
\n2 en The English text for article 2\n2 fr The French text for article 2\n2 de The German text for article 2\n3 en The English text for article 3\n3 fr The French text for article 3\n3 de The German text for article 3\n3 sw Oh wait, article 3 also needed Swahili text!\n
\n
soup wrap:
I recommend against using either of these methods you described. Instead, create a single highlight table with 3 columns:
CREATE TABLE highlight
(
article_id INT NOT NULL,
language VARCHAR(),
highlight_text VARCHAR() CHARACTER SET utf8,
PRIMARY KEY (article_id, language),
FOREIGN KEY (article_id) REFERENCES articles (article_id)
)
Each row links to an article by article_id, and contains a language version and the relevant text. This allows you to add as many languages as you ever need to, and it doesn't matter if one is missing for an article - it simply doesn't appear in the table. It also allows you to use entirely different language sets per article if it ever becomes necessary.
Values then look like:
2 en The English text for article 2
2 fr The French text for article 2
2 de The German text for article 2
3 en The English text for article 3
3 fr The French text for article 3
3 de The German text for article 3
3 sw Oh wait, article 3 also needed Swahili text!
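A minimal working sketch of this design in SQLite via Python; the sample articles and text are made up, and a missing translation is simply an absent row rather than a NULL column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE articles (article_id INTEGER PRIMARY KEY);
CREATE TABLE highlight (
    article_id     INT NOT NULL,
    language       TEXT NOT NULL,     -- e.g. 'en', 'fr', 'de'
    highlight_text TEXT NOT NULL,
    PRIMARY KEY (article_id, language),
    FOREIGN KEY (article_id) REFERENCES articles (article_id)
);
INSERT INTO articles VALUES (2), (3);
INSERT INTO highlight VALUES
  (2, 'en', 'The English text for article 2'),
  (2, 'de', 'The German text for article 2'),
  (3, 'en', 'The English text for article 3'),
  (3, 'sw', 'Swahili text that only article 3 needed');
""")

# Fetch one article in one language; a missing language is simply no row.
row = conn.execute("""
    SELECT highlight_text FROM highlight
    WHERE article_id = ? AND language = ?
""", (2, 'de')).fetchone()
print(row[0])

missing = conn.execute(
    "SELECT highlight_text FROM highlight WHERE article_id=? AND language=?",
    (2, 'sw')).fetchone()
print(missing)  # None: article 2 has no Swahili row, and nothing breaks
```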
qid & accept id:
(10182533, 10182629)
query:
Efficient query to split a delimited column into a separate table
soup:
Create a split function:
\nCREATE FUNCTION dbo.SplitStrings(@List NVARCHAR(MAX))\nRETURNS TABLE\nAS\n RETURN ( SELECT Item FROM\n ( SELECT Item = x.i.value('(./text())[1]', 'nvarchar(max)')\n FROM ( SELECT [XML] = CONVERT(XML, '<i>'\n + REPLACE(@List, '.', '</i><i>') + '</i>').query('.')\n ) AS a CROSS APPLY [XML].nodes('i') AS x(i) ) AS y\n WHERE Item IS NOT NULL\n );\nGO\n
\nThen get rid of all the cursor and looping nonsense and do this:
\nINSERT dbo.mrhierlookup\n(\n heiraui,\n aui\n)\nSELECT s.Item, m.aui\n FROM dbo.mrhier3 AS m\n CROSS APPLY dbo.SplitStrings(m.ptr) AS s\nGROUP BY s.Item, m.aui;\n
\n
soup wrap:
Create a split function:
CREATE FUNCTION dbo.SplitStrings(@List NVARCHAR(MAX))
RETURNS TABLE
AS
RETURN ( SELECT Item FROM
( SELECT Item = x.i.value('(./text())[1]', 'nvarchar(max)')
FROM ( SELECT [XML] = CONVERT(XML, '<i>'
+ REPLACE(@List, '.', '</i><i>') + '</i>').query('.')
) AS a CROSS APPLY [XML].nodes('i') AS x(i) ) AS y
WHERE Item IS NOT NULL
);
GO
Then get rid of all the cursor and looping nonsense and do this:
INSERT dbo.mrhierlookup
(
heiraui,
aui
)
SELECT s.Item, m.aui
FROM dbo.mrhier3 AS m
CROSS APPLY dbo.SplitStrings(m.ptr) AS s
GROUP BY s.Item, m.aui;
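The XML trick above is SQL Server-specific, but the underlying split-then-set-based-insert idea is portable. Here is a sketch of the same split on '.'-delimited strings using a recursive CTE in SQLite via Python (table and column names follow the answer; the data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mrhier3 (aui TEXT, ptr TEXT)")
conn.execute("INSERT INTO mrhier3 VALUES ('A1', 'X.Y.Z'), ('A2', 'X')")

# Recursive CTE: append a trailing '.', then repeatedly peel off the
# piece before the first '.' until nothing remains.
rows = conn.execute("""
    WITH RECURSIVE split(aui, item, rest) AS (
        SELECT aui, NULL, ptr || '.' FROM mrhier3
        UNION ALL
        SELECT aui,
               substr(rest, 1, instr(rest, '.') - 1),
               substr(rest, instr(rest, '.') + 1)
        FROM split WHERE rest <> ''
    )
    SELECT item, aui FROM split
    WHERE item IS NOT NULL
    ORDER BY aui, item
""").fetchall()

print(rows)  # one (item, aui) row per delimited piece
```

As in the answer, the point is that the whole split-and-insert is one set-based statement, with no cursor or loop in user code.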
qid & accept id:
(10189129, 10189450)
query:
Retrieving the previous 10 results from a table that are nearest to a certain date and maintaining ascending sorting order
soup:
This is a simple one, LIMIT the subquery
\nSELECT ci.*\nFROM `calendar_item` AS `ci` \nWHERE (ci.id IN (\n SELECT id FROM calendar_item \n WHERE (end_time < FROM_UNIXTIME(1334667600))\n ORDER BY end_time DESC\n LIMIT 10\n))\nGROUP BY `ci`.`id` \nORDER BY `ci`.`end_time` ASC\nLIMIT 10\n
\nWithout the limit in the subquery you are selecting ALL rows with a timestamp < the FROM_UNIXTIME value. You are then reordering ASC and selecting the first 10, i.e. the earliest 10.
\nIf you limit the subquery you get the 10 highest which satisfy your FROM_UNIXTIME, and the outer can then select them.
\nAn alternative (and my preferred) would be the following, where the subquery gets the data, and the outer query simply reorders it before spitting it back out.
\nSELECT i.*\nFROM (\n SELECT ci.*\n FROM calendar_item AS ci\n WHERE ci.end_time < FROM_UNIXTIME(1334667600)\n ORDER BY ci.end_time DESC\n LIMIT 10\n) AS i\nORDER BY i.`end_time` ASC\n
\n
soup wrap:
This is a simple one: LIMIT the subquery.
SELECT ci.*
FROM `calendar_item` AS `ci`
WHERE (ci.id IN (
SELECT id FROM calendar_item
WHERE (end_time < FROM_UNIXTIME(1334667600))
ORDER BY end_time DESC
LIMIT 10
))
GROUP BY `ci`.`id`
ORDER BY `ci`.`end_time` ASC
LIMIT 10
Without the limit in the subquery you are selecting ALL rows with a timestamp < the FROM_UNIXTIME value. You are then reordering ASC and selecting the first 10, i.e. the earliest 10.
If you limit the subquery you get the 10 highest that satisfy your FROM_UNIXTIME condition, and the outer query can then select them.
An alternative (and my preferred approach) would be the following, where the subquery gets the data and the outer query simply reorders it before spitting it back out.
SELECT i.*
FROM (
SELECT ci.*
FROM calendar_item AS ci
WHERE ci.end_time < FROM_UNIXTIME(1334667600)
ORDER BY ci.end_time DESC
LIMIT 10
) AS i
ORDER BY i.`end_time` ASC
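The derived-table version is easy to check on toy data. A minimal sketch in SQLite via Python, using plain integers as stand-in timestamps and LIMIT 3 instead of 10 to keep the example small:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE calendar_item (id INTEGER PRIMARY KEY, end_time INT)")
# end_time as plain integers standing in for timestamps.
conn.executemany("INSERT INTO calendar_item VALUES (?, ?)",
                 [(i, i * 100) for i in range(1, 8)])

cutoff = 650  # items 1..6 end before the cutoff
rows = conn.execute("""
    SELECT i.id FROM (
        SELECT ci.id, ci.end_time
        FROM calendar_item AS ci
        WHERE ci.end_time < ?
        ORDER BY ci.end_time DESC
        LIMIT 3                  -- the 3 nearest items before the cutoff
    ) AS i
    ORDER BY i.end_time ASC
""", (cutoff,)).fetchall()

print([r[0] for r in rows])  # the nearest three, back in ascending order
```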
qid & accept id:
(10196144, 10196628)
query:
SQL include duplicates in an SELECT statement
soup:
It seems like you just want something like this:
\nSELECT C_NAME, AnswerNum\nFROM\n(\nSELECT C.C_NAME, "1" AS AnswerNum, T.USER_ID\nFROM COUNTRY C \n JOIN TBL_ANSWERS T \n ON T.ANSWER1_ID = C.C_ID \nUNION ALL\nSELECT C.C_NAME, "2" AS AnswerNum, T.USER_ID\nFROM COUNTRY C \n JOIN TBL_ANSWERS T \n ON T.ANSWER2_ID = C.C_ID \n...\nUNION ALL\nSELECT C.C_NAME, "8" AS AnswerNum, T.USER_ID\nFROM COUNTRY C \n JOIN TBL_ANSWERS T \n ON T.ANSWER8_ID = C.C_ID \n) AS AnswersJoined\nWHERE USER_ID = '4' \n
\nHowever, I would seriously consider reworking your tables so that you use relationship mapping tables to figure out the questions and answers. This would allow this to be more easily created in one query
\nSomething like
\nTbl_Answer
\n Question_Id|User_Id|Response_Id\n
\nTbl_Question
\n Id|QuestionNumber\n
\nThis would allow you to just run a simple BETWEEN. Something like this:
\nSELECT C.Name\nFROM Country C\nWHERE EXISTS\n(\n SELECT 1 \n FROM Tbl_Answer T\n JOIN Tbl_Question Q\n ON Q.Id = T.Question_Id\n WHERE T.User_Id = 4 AND T.Response_Id = C.C_ID\n AND Q.QuestionNumber BETWEEN 1 AND 8\n)\n
\n
soup wrap:
It seems like you just want something like this:
SELECT C_NAME, AnswerNum
FROM
(
SELECT C.C_NAME, "1" AS AnswerNum, T.USER_ID
FROM COUNTRY C
JOIN TBL_ANSWERS T
ON T.ANSWER1_ID = C.C_ID
UNION ALL
SELECT C.C_NAME, "2" AS AnswerNum, T.USER_ID
FROM COUNTRY C
JOIN TBL_ANSWERS T
ON T.ANSWER2_ID = C.C_ID
...
UNION ALL
SELECT C.C_NAME, "8" AS AnswerNum, T.USER_ID
FROM COUNTRY C
JOIN TBL_ANSWERS T
ON T.ANSWER8_ID = C.C_ID
) AS AnswersJoined
WHERE USER_ID = '4'
However, I would seriously consider reworking your tables so that you use relationship mapping tables to model the questions and answers. That would make this much easier to express in a single query.
Something like
Tbl_Answer
Question_Id|User_Id|Response_Id
Tbl_Question
Id|QuestionNumber
This would allow you to just run a simple BETWEEN. Something like this:
SELECT C.Name
FROM Country C
WHERE EXISTS
(
SELECT 1
FROM Tbl_Answer T
JOIN Tbl_Question Q
ON Q.Id = T.Question_Id
WHERE T.User_Id = 4 AND T.Response_Id = C.C_ID
AND Q.QuestionNumber BETWEEN 1 AND 8
)
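The normalized design makes the EXISTS query above straightforward. A minimal sketch in SQLite via Python; the sample countries, question ids, and responses are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Country (C_ID INT, Name TEXT);
CREATE TABLE Tbl_Question (Id INT, QuestionNumber INT);
CREATE TABLE Tbl_Answer (Question_Id INT, User_Id INT, Response_Id INT);
INSERT INTO Country VALUES (1, 'France'), (2, 'Spain'), (3, 'Italy');
INSERT INTO Tbl_Question VALUES (10, 1), (11, 2);
-- user 4 answered question 1 with France and question 2 with Italy
INSERT INTO Tbl_Answer VALUES (10, 4, 1), (11, 4, 3), (10, 5, 2);
""")

# Every country user 4 picked in any of questions 1..8.
rows = conn.execute("""
    SELECT C.Name
    FROM Country C
    WHERE EXISTS (
        SELECT 1
        FROM Tbl_Answer T
        JOIN Tbl_Question Q ON Q.Id = T.Question_Id
        WHERE T.User_Id = 4 AND T.Response_Id = C.C_ID
          AND Q.QuestionNumber BETWEEN 1 AND 8
    )
    ORDER BY C.Name
""").fetchall()

print([r[0] for r in rows])
```

Note EXISTS returns each country once; if you need one row per answer (duplicates included), join Tbl_Answer directly instead.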
qid & accept id:
(10199927, 10200631)
query:
Find chars in any order in Sql Server
soup:
The easiest thing to do is split and pivot and then join.
\nSo avi becomes three rows in a letters table:
\na\nv\ni\n
\nThen join to the word list with INNER JOIN ON CHARINDEX(letter, word) > 0
\nUse GROUP BY word
\nwith HAVING COUNT(*) = (SELECT COUNT(*) FROM letters)
\nIn this example, I just picked up and modified a cte from here Split a string into individual characters in Sql Server 2005 to avoid having to fool around with a numbers table (but I normally would probably use a numbers table to do my pivot).
\n\nDECLARE @t AS TABLE (search varchar(100));\nINSERT INTO @t VALUES ('avi');\n\nDECLARE @words AS TABLE (word varchar(100));\nINSERT INTO @words VALUES ('avion'), ('iva'), ('name');\nwith cte as\n(\n select substring(search, 1, 1) as letter,\n stuff(search, 1, 1, '') as search,\n 1 as RowID\n from @t\n union all\n select substring(search, 1, 1) as letter,\n stuff(search, 1, 1, '') as search,\n RowID + 1 as RowID\n from cte\n where len(search) > 0\n)\n,letters AS (\n SELECT DISTINCT letter FROM cte\n)\nSELECT words.word\nFROM letters\nINNER JOIN @words AS words\n ON CHARINDEX(letter, word) > 0\nGROUP BY words.word\nHAVING COUNT(*) = (SELECT COUNT(*) FROM letters)\n
\n
soup wrap:
The easiest thing to do is split and pivot and then join.
So avi becomes three rows in a letters table:
a
v
i
Then join to the word list with INNER JOIN ON CHARINDEX(letter, word) > 0
Use GROUP BY word
with HAVING COUNT(*) = (SELECT COUNT(*) FROM letters)
In this example, I just picked up and modified a CTE from Split a string into individual characters in Sql Server 2005 to avoid having to fool around with a numbers table (though I would normally use a numbers table to do my pivot).
DECLARE @t AS TABLE (search varchar(100));
INSERT INTO @t VALUES ('avi');
DECLARE @words AS TABLE (word varchar(100));
INSERT INTO @words VALUES ('avion'), ('iva'), ('name');
with cte as
(
select substring(search, 1, 1) as letter,
stuff(search, 1, 1, '') as search,
1 as RowID
from @t
union all
select substring(search, 1, 1) as letter,
stuff(search, 1, 1, '') as search,
RowID + 1 as RowID
from cte
where len(search) > 0
)
,letters AS (
SELECT DISTINCT letter FROM cte
)
SELECT words.word
FROM letters
INNER JOIN @words AS words
ON CHARINDEX(letter, word) > 0
GROUP BY words.word
HAVING COUNT(*) = (SELECT COUNT(*) FROM letters)
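The split-pivot-join idea itself is portable: once the search string is a table of letters, any engine's substring-search function can play the CHARINDEX role. A minimal sketch in SQLite via Python, using instr() in place of CHARINDEX and the answer's sample words:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE letters (letter TEXT);
INSERT INTO letters VALUES ('a'), ('v'), ('i');   -- 'avi', pre-split
CREATE TABLE words (word TEXT);
INSERT INTO words VALUES ('avion'), ('iva'), ('name');
""")

# A word matches when it contains every letter: join on "letter occurs in
# word" and require the match count to equal the number of letters.
rows = conn.execute("""
    SELECT w.word
    FROM letters l
    JOIN words w ON instr(w.word, l.letter) > 0
    GROUP BY w.word
    HAVING COUNT(*) = (SELECT COUNT(*) FROM letters)
    ORDER BY w.word
""").fetchall()

print([r[0] for r in rows])  # 'name' drops out: it only contains 'a'
```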
qid & accept id:
(10209706, 10209896)
query:
Elegant way to create a circular permutation with MySQL
soup:
You can use the mod operator, % to ORDER BY
\nDECLARE @maxId AS INT\nSELECT @maxId = MAX(Id) FROM MyTable\n\nSELECT id FROM MyTable\nORDER BY Id % @maxId \n
\nYou can get further rotations by adding to Id, ie
\nORDER BY (Id + 1) % @maxId\n
\nget you
\n3\n4\n1\n2\n
\nWorking SQL Fiddle (which I just found out exists)\nhttp://sqlfiddle.com/#!3/a7f15/5
\n
soup wrap:
You can use the mod operator, %, in the ORDER BY
DECLARE @maxId AS INT
SELECT @maxId = MAX(Id) FROM MyTable
SELECT id FROM MyTable
ORDER BY Id % @maxId
You can get further rotations by adding to Id, i.e.
ORDER BY (Id + 1) % @maxId
which gets you
3
4
1
2
Working SQL Fiddle (which I just found out exists)
http://sqlfiddle.com/#!3/a7f15/5
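The same trick can be checked quickly in SQLite via Python (ids 1..4 as in the answer); the rotation offset is just the constant added to Id before the modulo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (Id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO MyTable VALUES (?)", [(1,), (2,), (3,), (4,)])

# ORDER BY (Id + k) % MAX(Id) rotates the ordering by k positions:
# the row whose key wraps to 0 comes first.
def rotation(k):
    return [r[0] for r in conn.execute("""
        SELECT Id FROM MyTable
        ORDER BY (Id + ?) % (SELECT MAX(Id) FROM MyTable)
    """, (k,))]

print(rotation(0))  # 4, 1, 2, 3
print(rotation(1))  # 3, 4, 1, 2  -- matches the answer's example
```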
qid & accept id:
(10240035, 10240129)
query:
SQL and Counting
soup:
Give this a try:
\nselect name,\n count(case when grade in ('A', 'B', 'C') then 1 end) totalPass,\n count(case when grade = 'A' then 1 end) totalA,\n count(case when grade = 'B' then 1 end) totalB,\n count(case when grade = 'C' then 1 end) totalC\nfrom t\ngroup by name\n
\nHere is the fiddle.
\nOr we can make it even simpler if you were using MySQL:
\nselect name,\n sum(grade in ('A', 'B', 'C')) totalPass,\n sum(grade = 'A') totalA,\n sum(grade = 'B') totalB,\n sum(grade = 'C') totalC\nfrom t\ngroup by name\n
\nHere is the fiddle.
\n
soup wrap:
Give this a try:
select name,
count(case when grade in ('A', 'B', 'C') then 1 end) totalPass,
count(case when grade = 'A' then 1 end) totalA,
count(case when grade = 'B' then 1 end) totalB,
count(case when grade = 'C' then 1 end) totalC
from t
group by name
Here is the fiddle.
Or, if you are using MySQL, you can make it even simpler:
select name,
sum(grade in ('A', 'B', 'C')) totalPass,
sum(grade = 'A') totalA,
sum(grade = 'B') totalB,
sum(grade = 'C') totalC
from t
group by name
Here is the fiddle.
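The CASE-based version runs unchanged on most engines, because COUNT ignores the NULL that CASE produces for non-matching rows. A minimal check in SQLite via Python, with made-up grades:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT, grade TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [('ann', 'A'), ('ann', 'B'), ('ann', 'F'), ('bob', 'C')])

# COUNT ignores NULLs, so CASE WHEN ... THEN 1 END counts matching rows only.
rows = conn.execute("""
    SELECT name,
           COUNT(CASE WHEN grade IN ('A','B','C') THEN 1 END) totalPass,
           COUNT(CASE WHEN grade = 'A' THEN 1 END) totalA
    FROM t
    GROUP BY name
    ORDER BY name
""").fetchall()

print(rows)  # ann: 2 passes, 1 A; bob: 1 pass, 0 A
```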
qid & accept id:
(10277115, 10277164)
query:
Select newest record group by username in SQL Server 2008
soup:
You have several options here, but adding a ROW_NUMBER partitioned by user and sorted (descending) on your timestamp allows you to easily select the latest records.
\nUsing ROW_NUMBER
\nSELECT *\nFROM (\n SELECT ID, voting_ID, username, timestamp, XMLBallot\n , rn = ROW_NUMBER() OVER (PARTITION BY voting_ID, username ORDER BY timestamp DESC)\n FROM Ballots\n ) bt \nWHERE rn = 1\n
\nAlternatively, you can select the maximum timestamp per user and join on that.
\nUsing MAX
\nSELECT bt.ID, bt.voting_ID, bt.username, bt.timestamp, bt.XMLBallot\nFROM Ballots bt\n INNER JOIN (\n SELECT username, voting_ID, timestamp = MAX(timestamp)\n FROM Ballots\n GROUP BY\n username, voting_ID\n ) btm ON btm.username = bt.Username\n AND btm.voting_ID = bt.voting_ID\n AND btm.timestamp = bt.timestamp\n
\n
soup wrap:
You have several options here, but adding a ROW_NUMBER partitioned by user and sorted (descending) on your timestamp allows you to easily select the latest records.
Using ROW_NUMBER
SELECT *
FROM (
SELECT ID, voting_ID, username, timestamp, XMLBallot
, rn = ROW_NUMBER() OVER (PARTITION BY voting_ID, username ORDER BY timestamp DESC)
FROM Ballots
) bt
WHERE rn = 1
Alternatively, you can select the maximum timestamp per user and join on that.
Using MAX
SELECT bt.ID, bt.voting_ID, bt.username, bt.timestamp, bt.XMLBallot
FROM Ballots bt
INNER JOIN (
SELECT username, voting_ID, timestamp = MAX(timestamp)
FROM Ballots
GROUP BY
username, voting_ID
) btm ON btm.username = bt.Username
AND btm.voting_ID = bt.voting_ID
AND btm.timestamp = bt.timestamp
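The MAX-and-join variant is fully portable. A minimal sketch in SQLite via Python, with made-up ballot rows showing one latest ballot surviving per (username, voting_ID):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Ballots
    (ID INT, voting_ID INT, username TEXT, timestamp INT, XMLBallot TEXT)""")
conn.executemany("INSERT INTO Ballots VALUES (?,?,?,?,?)", [
    (1, 100, 'alice', 10, '<a/>'),
    (2, 100, 'alice', 20, '<b/>'),   # alice's latest ballot for vote 100
    (3, 100, 'bob',   15, '<c/>'),
])

# Join each row to its group's MAX(timestamp); only the latest rows survive.
rows = conn.execute("""
    SELECT bt.ID, bt.username
    FROM Ballots bt
    JOIN (SELECT username, voting_ID, MAX(timestamp) AS ts
          FROM Ballots GROUP BY username, voting_ID) btm
      ON btm.username = bt.username
     AND btm.voting_ID = bt.voting_ID
     AND btm.ts = bt.timestamp
    ORDER BY bt.ID
""").fetchall()

print(rows)
```

One caveat, as with any MAX-join: if two ballots share the exact same timestamp within a group, both survive, whereas the ROW_NUMBER version always returns exactly one.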
qid & accept id:
(10296422, 10296593)
query:
How to assign an id to a group SQL Server
soup:
I'd be tempted to create a separate table, RunInformation, with a primary key column, Id, and a RunDate column:
\nId -- RunDate\n
\nYou could then replace the dateRan column from your table with a reference to the RunInformation table. This will allow you to store additional information about the run in future, if the need arises.
\nId -- Name -- AttributeIMeasure -- RunInformationId\n
\n
soup wrap:
I'd be tempted to create a separate table, RunInformation, with a primary key column, Id, and a RunDate column:
Id -- RunDate
You could then replace the dateRan column from your table with a reference to the RunInformation table. This will allow you to store additional information about the run in future, if the need arises.
Id -- Name -- AttributeIMeasure -- RunInformationId
qid & accept id:
(10310499, 10310674)
query:
How to avoid "Ambiguous field in query" without adding Table Name or Table Alias in where clause
soup:
If you for some reason can't live with doing
\nselect T1.name, T1.address, T1.phone, T2.title, T2.description from T1\nLeft Join T2 on T1.CID=T2.ID\nwhere T2.STATUS = 1\n
\nThen I guess you could
\nSELECT T1.name, T1.address, T1.phone, T2.title, T2.description \nFROM ( SELECT CID, name, address, phone\n FROM T1) AS T1\nLEFT JOIN T2\nON T1.CID=T2.ID\nWHERE STATUS = 1\n
\nBasically, just skip selecting the STATUS column from T1; then there can be no conflict.
\nBottom line: there's no simple way of doing this. The closest to simple would be to give the two STATUS columns different names, but even that seems extreme.
\n
soup wrap:
If you for some reason can't live with doing
select T1.name, T1.address, T1.phone, T2.title, T2.description from T1
Left Join T2 on T1.CID=T2.ID
where T2.STATUS = 1
Then I guess you could
SELECT T1.name, T1.address, T1.phone, T2.title, T2.description
FROM ( SELECT CID, name, address, phone
FROM T1) AS T1
LEFT JOIN T2
ON T1.CID=T2.ID
WHERE STATUS = 1
Basically, just skip selecting the STATUS column from T1; then there can be no conflict.
Bottom line: there's no simple way of doing this. The closest to simple would be to give the two STATUS columns different names, but even that seems extreme.
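A quick way to see the derived-table trick work: with both tables carrying a STATUS column, a bare STATUS in the WHERE clause would be ambiguous, but once the derived table omits T1's copy, only T2.STATUS is in scope. A sketch in SQLite via Python with made-up rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE T1 (CID INT, name TEXT, STATUS INT);
CREATE TABLE T2 (ID INT, title TEXT, STATUS INT);
INSERT INTO T1 VALUES (1, 'ann', 9);
INSERT INTO T2 VALUES (1, 'hello', 1), (1, 'bye', 0);
""")

# The derived table exposes only CID and name from T1, so the bare
# STATUS below can only refer to T2's column - no ambiguity.
rows = conn.execute("""
    SELECT T1.name, T2.title
    FROM (SELECT CID, name FROM T1) AS T1
    LEFT JOIN T2 ON T1.CID = T2.ID
    WHERE STATUS = 1
""").fetchall()

print(rows)  # only the STATUS = 1 row from T2 survives
```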
qid & accept id:
(10330898, 10330971)
query:
sql query to set year as column name
soup:
Maybe something like this:
\nSELECT \n item_name, \n SUM(CASE WHEN YEAR( DATE )=2011 THEN item_sold_qty ELSE 0 END) AS '2011',\n SUM(CASE WHEN YEAR( DATE )=2012 THEN item_sold_qty ELSE 0 END) AS '2012'\nFROM \n item\nJOIN sales ON item.id = sales.item_number\nGROUP BY\n item_name\nORDER BY \n item_name\n
\nEDIT
\nIf you want the other years and still sum them. Then you can do this:
\nSELECT \n item_name, \n SUM(CASE WHEN YEAR( DATE )=2011 THEN item_sold_qty ELSE 0 END) AS '2011',\n SUM(CASE WHEN YEAR( DATE )=2012 THEN item_sold_qty ELSE 0 END) AS '2012',\n SUM(CASE WHEN NOT YEAR( DATE ) IN (2011,2012) THEN item_sold_qty ELSE 0 END) AS 'AllOtherYears'\nFROM \n item\nJOIN sales ON item.id = sales.item_number\nGROUP BY\n item_name\nORDER BY \n item_name\n
\nEDIT2
\nIf you have a lot of years and you do not want to keep adding them by hand, then you need to use dynamic SQL: concatenate the SQL into a varchar and then execute it.
\nUseful References:
\n\n- MySQL pivot table with dynamic headers based on single column data
\n- How To have Dynamic SQL in MySQL Stored Procedure
\n- MySQL/Pivot table
\n- MYSQL - Rows to Columns
\n
\n
soup wrap:
Maybe something like this:
SELECT
item_name,
SUM(CASE WHEN YEAR( DATE )=2011 THEN item_sold_qty ELSE 0 END) AS '2011',
SUM(CASE WHEN YEAR( DATE )=2012 THEN item_sold_qty ELSE 0 END) AS '2012'
FROM
item
JOIN sales ON item.id = sales.item_number
GROUP BY
item_name
ORDER BY
item_name
EDIT
If you want the other years and still sum them. Then you can do this:
SELECT
item_name,
SUM(CASE WHEN YEAR( DATE )=2011 THEN item_sold_qty ELSE 0 END) AS '2011',
SUM(CASE WHEN YEAR( DATE )=2012 THEN item_sold_qty ELSE 0 END) AS '2012',
SUM(CASE WHEN NOT YEAR( DATE ) IN (2011,2012) THEN item_sold_qty ELSE 0 END) AS 'AllOtherYears'
FROM
item
JOIN sales ON item.id = sales.item_number
GROUP BY
item_name
ORDER BY
item_name
EDIT2
If you have a lot of years and you do not want to keep adding them by hand, then you need to use dynamic SQL: concatenate the SQL into a varchar and then execute it.
Useful References:
- MySQL pivot table with dynamic headers based on single column data
- How To have Dynamic SQL in MySQL Stored Procedure
- MySQL/Pivot table
- MYSQL - Rows to Columns
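The EDIT2 idea of building the year columns dynamically can also be sketched from a client instead of a stored procedure. Here is a hedged example using Python's sqlite3 (SQLite spells YEAR(date) as strftime('%Y', date); the schema and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Hypothetical schema loosely mirroring the answer's item/sales tables.
cur.executescript("""
CREATE TABLE item (id INTEGER, item_name TEXT);
CREATE TABLE sales (item_number INTEGER, date TEXT, item_sold_qty INTEGER);
INSERT INTO item VALUES (1, 'widget'), (2, 'gadget');
INSERT INTO sales VALUES (1, '2011-03-01', 5), (1, '2012-07-09', 7), (2, '2011-11-20', 3);
""")
# Build one SUM(CASE ...) column per distinct year -- the "dynamic SQL"
# step, done in the client. (Real code should validate the interpolated
# values; here they come from strftime and are known to be digits.)
years = [r[0] for r in cur.execute(
    "SELECT DISTINCT strftime('%Y', date) FROM sales ORDER BY 1")]
cols = ", ".join(
    f"SUM(CASE WHEN strftime('%Y', date) = '{y}' THEN item_sold_qty ELSE 0 END) AS \"{y}\""
    for y in years)
sql = f"""SELECT item_name, {cols}
FROM item JOIN sales ON item.id = sales.item_number
GROUP BY item_name ORDER BY item_name"""
rows = cur.execute(sql).fetchall()
print(rows)  # [('gadget', 3, 0), ('widget', 5, 7)]
```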
qid & accept id:
(10338000, 10340501)
query:
how to do content based authorization?
soup:
I think you're on the right track with views, but since each call will need to pass the user ID, it sounds like what you really need are table-valued functions. I'm most familiar with Microsoft SQL, where it would look something like this:
\nSELECT P.*\nFROM Projects AS P\n INNER JOIN dbo.AuthProjects(@UserID) AS AP ON P.ProjectID = AP.ProjectID\n
\nNote that the TVF literally returns a table, to which you would join to see which projects are available. The TVF definition might look something like this:
\nCREATE FUNCTION dbo.AuthProjects(@UserID INT)\n RETURNS @Results TABLE (ProjectID INT NOT NULL, WriteAccess BIT NOT NULL)\nAS BEGIN\n INSERT INTO @Results (ProjectID, WriteAccess)\n SELECT\n ProjectID, WriteAccess\n FROM\n Authorizations\n WHERE\n UserID = @UserID\n\n -- Additional logic for more ways a project may be authorized\n\n RETURN\nEND\n
\n
soup wrap:
I think you're on the right track with views, but since each call will need to pass the user ID, it sounds like what you really need are table-valued functions. I'm most familiar with Microsoft SQL, where it would look something like this:
SELECT P.*
FROM Projects AS P
INNER JOIN dbo.AuthProjects(@UserID) AS AP ON P.ProjectID = AP.ProjectID
Note that the TVF literally returns a table, to which you would join to see which projects are available. The TVF definition might look something like this:
CREATE FUNCTION dbo.AuthProjects(@UserID INT)
RETURNS @Results TABLE (ProjectID INT NOT NULL, WriteAccess BIT NOT NULL)
AS BEGIN
INSERT INTO @Results (ProjectID, WriteAccess)
SELECT
ProjectID, WriteAccess
FROM
Authorizations
WHERE
UserID = @UserID
-- Additional logic for more ways a project may be authorized
RETURN
END
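SQLite has no table-valued functions, but the same join-against-authorized-rows shape can be sketched with a parameterized derived table playing the role of AuthProjects(@UserID) (Python's sqlite3; schema and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Projects (ProjectID INTEGER, Name TEXT);
CREATE TABLE Authorizations (UserID INTEGER, ProjectID INTEGER, WriteAccess INTEGER);
INSERT INTO Projects VALUES (1, 'Apollo'), (2, 'Gemini');
INSERT INTO Authorizations VALUES (42, 1, 1);
""")
# The TVF call becomes a parameterized subquery; only projects the user
# is authorized for survive the join.
rows = cur.execute("""
SELECT P.Name
FROM Projects AS P
JOIN (SELECT ProjectID FROM Authorizations WHERE UserID = ?) AS AP
  ON P.ProjectID = AP.ProjectID
""", (42,)).fetchall()
print(rows)  # [('Apollo',)]
```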
qid & accept id:
(10389260, 10389455)
query:
Select 30% of each column value
soup:
try something like this:
\nDECLARE @YourTable table (A int, b varchar(10))\nINSERT @YourTable VALUES (0, 'hello') --OP's data\nINSERT @YourTable VALUES (0, 'test')\nINSERT @YourTable VALUES (0, 'hi')\nINSERT @YourTable VALUES (1, 'blah1')\nINSERT @YourTable VALUES (1, 'blah2')\nINSERT @YourTable VALUES (1, 'blah3')\nINSERT @YourTable VALUES (1, 'blah4')\nINSERT @YourTable VALUES (1, 'blah5')\nINSERT @YourTable VALUES (1, 'blah6')\n\n;WITH NumberedRows AS\n( SELECT \n A,B,ROW_NUMBER() OVER (PARTITION BY A ORDER BY A,B) AS RowNumber\n FROM @YourTable\n)\n, GroupCounts AS\n( SELECT\n A,MAX(RowNumber) AS MaxA\n FROM NumberedRows\n GROUP BY A\n)\nSELECT\n n.a,n.b\n FROM NumberedRows n\n INNER JOIN GroupCounts c ON n.A=c.A\n WHERE n.RowNUmber<=(c.MaxA+1)*0.3\n
\nOUTPUT:
\na b\n----------- ----------\n0 hello\n1 blah1\n1 blah2\n\n(3 row(s) affected)\n
\nEDIT based on the great idea in the comment from Andriy M
\n;WITH NumberedRows AS\n( SELECT \n A,B,ROW_NUMBER() OVER (PARTITION BY A ORDER BY A,B) AS RowNumber\n ,COUNT(*) OVER (PARTITION BY A) AS TotalOf\n FROM @YourTable\n)\nSELECT\n n.a,n.b\n FROM NumberedRows n\n WHERE n.RowNumber<=(n.TotalOf+1)*0.3\n ORDER BY A\n
\nOUTPUT:
\na b\n----------- ----------\n0 hello\n1 blah1\n1 blah2\n\n(3 row(s) affected)\n
\nEDIT here are "random" rows, using Andriy M's idea:
\nDECLARE @YourTable table (A int, b varchar(10))\nINSERT @YourTable VALUES (0, 'hello') --OP's data\nINSERT @YourTable VALUES (0, 'test')\nINSERT @YourTable VALUES (0, 'hi')\nINSERT @YourTable VALUES (1, 'blah1')\nINSERT @YourTable VALUES (1, 'blah2')\nINSERT @YourTable VALUES (1, 'blah3')\nINSERT @YourTable VALUES (1, 'blah4')\nINSERT @YourTable VALUES (1, 'blah5')\nINSERT @YourTable VALUES (1, 'blah6')\n\n;WITH NumberedRows AS\n( SELECT \n A,B,ROW_NUMBER() OVER (PARTITION BY A ORDER BY newid()) AS RowNumber\n FROM @YourTable\n)\n, GroupCounts AS (SELECT A,COUNT(A) AS MaxA FROM NumberedRows GROUP BY A)\nSELECT\n n.A,n.B\n FROM NumberedRows n\n INNER JOIN GroupCounts c ON n.A=c.A\n WHERE n.RowNUmber<=(c.MaxA+1)*0.3\n ORDER BY n.A\n
\nOUTPUT:
\na b\n----------- ----------\n0 hi\n1 blah3\n1 blah6\n\n(3 row(s) affected)\n
\n
soup wrap:
try something like this:
DECLARE @YourTable table (A int, b varchar(10))
INSERT @YourTable VALUES (0, 'hello') --OP's data
INSERT @YourTable VALUES (0, 'test')
INSERT @YourTable VALUES (0, 'hi')
INSERT @YourTable VALUES (1, 'blah1')
INSERT @YourTable VALUES (1, 'blah2')
INSERT @YourTable VALUES (1, 'blah3')
INSERT @YourTable VALUES (1, 'blah4')
INSERT @YourTable VALUES (1, 'blah5')
INSERT @YourTable VALUES (1, 'blah6')
;WITH NumberedRows AS
( SELECT
A,B,ROW_NUMBER() OVER (PARTITION BY A ORDER BY A,B) AS RowNumber
FROM @YourTable
)
, GroupCounts AS
( SELECT
A,MAX(RowNumber) AS MaxA
FROM NumberedRows
GROUP BY A
)
SELECT
n.a,n.b
FROM NumberedRows n
INNER JOIN GroupCounts c ON n.A=c.A
WHERE n.RowNumber<=(c.MaxA+1)*0.3
OUTPUT:
a b
----------- ----------
0 hello
1 blah1
1 blah2
(3 row(s) affected)
EDIT based on the great idea in the comment from Andriy M
;WITH NumberedRows AS
( SELECT
A,B,ROW_NUMBER() OVER (PARTITION BY A ORDER BY A,B) AS RowNumber
,COUNT(*) OVER (PARTITION BY A) AS TotalOf
FROM @YourTable
)
SELECT
n.a,n.b
FROM NumberedRows n
WHERE n.RowNumber<=(n.TotalOf+1)*0.3
ORDER BY A
OUTPUT:
a b
----------- ----------
0 hello
1 blah1
1 blah2
(3 row(s) affected)
EDIT here are "random" rows, using Andriy M's idea:
DECLARE @YourTable table (A int, b varchar(10))
INSERT @YourTable VALUES (0, 'hello') --OP's data
INSERT @YourTable VALUES (0, 'test')
INSERT @YourTable VALUES (0, 'hi')
INSERT @YourTable VALUES (1, 'blah1')
INSERT @YourTable VALUES (1, 'blah2')
INSERT @YourTable VALUES (1, 'blah3')
INSERT @YourTable VALUES (1, 'blah4')
INSERT @YourTable VALUES (1, 'blah5')
INSERT @YourTable VALUES (1, 'blah6')
;WITH NumberedRows AS
( SELECT
A,B,ROW_NUMBER() OVER (PARTITION BY A ORDER BY newid()) AS RowNumber
FROM @YourTable
)
, GroupCounts AS (SELECT A,COUNT(A) AS MaxA FROM NumberedRows GROUP BY A)
SELECT
n.A,n.B
FROM NumberedRows n
INNER JOIN GroupCounts c ON n.A=c.A
WHERE n.RowNumber<=(c.MaxA+1)*0.3
ORDER BY n.A
OUTPUT:
a b
----------- ----------
0 hi
1 blah3
1 blah6
(3 row(s) affected)
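The COUNT(*) OVER version translates almost verbatim to any engine with window functions. A sketch in Python's sqlite3 (requires SQLite 3.25+), using the OP's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE YourTable (A INTEGER, B TEXT);
INSERT INTO YourTable VALUES
 (0,'hello'),(0,'test'),(0,'hi'),
 (1,'blah1'),(1,'blah2'),(1,'blah3'),(1,'blah4'),(1,'blah5'),(1,'blah6');
""")
# Number the rows per group and keep roughly the first 30% of each group,
# exactly as in the COUNT(*) OVER edit above.
rows = cur.execute("""
WITH NumberedRows AS (
  SELECT A, B,
         ROW_NUMBER() OVER (PARTITION BY A ORDER BY A, B) AS RowNumber,
         COUNT(*) OVER (PARTITION BY A) AS TotalOf
  FROM YourTable
)
SELECT A, B FROM NumberedRows
WHERE RowNumber <= (TotalOf + 1) * 0.3
ORDER BY A, B
""").fetchall()
print(rows)  # [(0, 'hello'), (1, 'blah1'), (1, 'blah2')]
```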
qid & accept id:
(10423479, 10423557)
query:
MySQL Retrieving data from two tables using inner join syntax
soup:
try this:
\nSELECT a.Event_ID, \n a.Competitor_ID,\n a.Place,\n COALESCE(b.money, 0) as `Money`\nFROM entry a left join prize b\n on (a.event_id = b.event_ID) AND\n (a.place = b.Place)\n
\nhope this helps.
\nEVENT_ID COMPETITOR_ID PLACE MONEY\n101 101 1 120\n101 102 2 60\n101 201 3 30\n101 301 4 0 -- << this is what you're looking for\n102 201 2 5\n103 201 3 40\n
\n
soup wrap:
try this:
SELECT a.Event_ID,
a.Competitor_ID,
a.Place,
COALESCE(b.money, 0) as `Money`
FROM entry a left join prize b
on (a.event_id = b.event_ID) AND
(a.place = b.Place)
hope this helps.
EVENT_ID COMPETITOR_ID PLACE MONEY
101 101 1 120
101 102 2 60
101 201 3 30
101 301 4 0 -- << this is what you're looking for
102 201 2 5
103 201 3 40
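The LEFT JOIN + COALESCE pattern is easy to verify locally. A minimal sketch in Python's sqlite3 with a cut-down version of the sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE entry (Event_ID INTEGER, Competitor_ID INTEGER, Place INTEGER);
CREATE TABLE prize (event_ID INTEGER, Place INTEGER, money INTEGER);
INSERT INTO entry VALUES (101, 101, 1), (101, 301, 4);
INSERT INTO prize VALUES (101, 1, 120);
""")
# The LEFT JOIN keeps unprized entries; COALESCE turns their NULL into 0.
rows = cur.execute("""
SELECT e.Event_ID, e.Competitor_ID, e.Place, COALESCE(p.money, 0) AS Money
FROM entry e LEFT JOIN prize p
  ON e.event_ID = p.event_ID AND e.Place = p.Place
ORDER BY e.Competitor_ID
""").fetchall()
print(rows)  # [(101, 101, 1, 120), (101, 301, 4, 0)]
```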
qid & accept id:
(10477017, 10477189)
query:
SQL Compare with one column, but returns all columns if matched
soup:
I think you want all rows from Table1 and Table2 such that each IDCodeField value appears in only one table or the other. You wish to exclude rows where the same value appears in both tables.
\nIgnoring, for the moment, the question of what to do if the same value appears in the same table, the simplest query would be:
\nSELECT * from Table1 T1 full outer join Table2 T2\nON T1.IDCodeField = T2.IDCodeField\nWHERE T1.IDCodeField is null or T2.IDCodeField is null\n
\nThis will give you the results, but possibly not in the format you're seeking - the result rows will be as wide as both tables combined, and the columns from the non-matching table will be NULL.
\nOr, we could do it in the UNION style from your question.
\nSELECT * from Table1 where IDCodeField not in (select IDCodeField from Table2)\nUNION ALL\nSELECT * from Table2 where IDCodeField not in (select IDCodeField from Table1)\n
\nBoth of the above queries will return rows if the same IDCodeField value is duplicated only within a single table. If you wish to exclude this possibility, you might try finding the unique values first:
\n;With UniqueIDs as (\n SELECT IDCodeField\n FROM (\n SELECT IDCodeField from Table1\n union all\n select IDCodeField from Table2) t\n GROUP BY IDCodeField\n HAVING COUNT(*) = 1\n)\nSELECT * from (\n SELECT * from Table1\n union all\n select * from Table2\n) t\n INNER JOIN\nUniqueIDs u\n ON\n t.IDCodeField = u.IDCodeField\n
\n
\n(Of course, all the uses of SELECT * above should be replaced with appropriate column lists)
\n
soup wrap:
I think you want all rows from Table1 and Table2 such that each IDCodeField value appears in only one table or the other. You wish to exclude rows where the same value appears in both tables.
Ignoring, for the moment, the question of what to do if the same value appears in the same table, the simplest query would be:
SELECT * from Table1 T1 full outer join Table2 T2
ON T1.IDCodeField = T2.IDCodeField
WHERE T1.IDCodeField is null or T2.IDCodeField is null
This will give you the results, but possibly not in the format you're seeking - the result rows will be as wide as both tables combined, and the columns from the non-matching table will be NULL.
Or, we could do it in the UNION style from your question.
SELECT * from Table1 where IDCodeField not in (select IDCodeField from Table2)
UNION ALL
SELECT * from Table2 where IDCodeField not in (select IDCodeField from Table1)
Both of the above queries will return rows if the same IDCodeField value is duplicated only within a single table. If you wish to exclude this possibility, you might try finding the unique values first:
;With UniqueIDs as (
SELECT IDCodeField
FROM (
SELECT IDCodeField from Table1
union all
select IDCodeField from Table2) t
GROUP BY IDCodeField
HAVING COUNT(*) = 1
)
SELECT * from (
SELECT * from Table1
union all
select * from Table2
) t
INNER JOIN
UniqueIDs u
ON
t.IDCodeField = u.IDCodeField
(Of course, all the uses of SELECT * above should be replaced with appropriate column lists)
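Here is the UNION ALL variant checked against a toy data set in Python's sqlite3 (older SQLite lacks FULL OUTER JOIN, so this is also the portable form there; the table shapes are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Table1 (IDCodeField INTEGER, val TEXT);
CREATE TABLE Table2 (IDCodeField INTEGER, val TEXT);
INSERT INTO Table1 VALUES (1, 'only-in-1'), (2, 'both');
INSERT INTO Table2 VALUES (2, 'both'), (3, 'only-in-2');
""")
# Keep only rows whose IDCodeField appears in exactly one of the tables.
rows = cur.execute("""
SELECT IDCodeField, val FROM Table1
 WHERE IDCodeField NOT IN (SELECT IDCodeField FROM Table2)
UNION ALL
SELECT IDCodeField, val FROM Table2
 WHERE IDCodeField NOT IN (SELECT IDCodeField FROM Table1)
ORDER BY IDCodeField
""").fetchall()
print(rows)  # [(1, 'only-in-1'), (3, 'only-in-2')]
```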
qid & accept id:
(10532323, 10554407)
query:
Replicate recent location
soup:
Try this:
\nselect *, CurrentLocation\nfrom tbl x\n\nouter apply\n(\n select top 1 location as CurrentLocation\n from tbl\n where [user] = x.[user]\n and id <= x.id\n order by id\n\n) y\n\norder by id\n
\nOutput:
\nID USER DATE LOCATION CURRENTLOCATION\n1 Tom 2012-03-06 US US\n2 Tom 2012-02-04 UK US\n3 Tom 2012-01-06 Uk US\n4 Bob 2012-03-06 UK UK\n5 Bob 2012-02-04 UK UK\n6 Bob 2012-01-06 AUS UK\n7 Dev 2012-03-06 US US\n8 Dev 2012-02-04 AUS US\n9 Nic 2012-01-06 US US\n
\nLive test: http://www.sqlfiddle.com/#!3/83a6a/7
\n
soup wrap:
Try this:
select *, CurrentLocation
from tbl x
outer apply
(
select top 1 location as CurrentLocation
from tbl
where [user] = x.[user]
and id <= x.id
order by id
) y
order by id
Output:
ID USER DATE LOCATION CURRENTLOCATION
1 Tom 2012-03-06 US US
2 Tom 2012-02-04 UK US
3 Tom 2012-01-06 Uk US
4 Bob 2012-03-06 UK UK
5 Bob 2012-02-04 UK UK
6 Bob 2012-01-06 AUS UK
7 Dev 2012-03-06 US US
8 Dev 2012-02-04 AUS US
9 Nic 2012-01-06 US US
Live test: http://www.sqlfiddle.com/#!3/83a6a/7
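Engines without OUTER APPLY can express the same per-row lookup as a correlated scalar subquery. A sketch in Python's sqlite3 with a subset of the sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE tbl (id INTEGER, user TEXT, location TEXT);
INSERT INTO tbl VALUES (1,'Tom','US'),(2,'Tom','UK'),(4,'Bob','UK'),(6,'Bob','AUS');
""")
# The correlated subquery picks the same "first row per user" that
# TOP 1 ... ORDER BY id does inside the OUTER APPLY.
rows = cur.execute("""
SELECT id, user, location,
       (SELECT location FROM tbl t2
         WHERE t2.user = t1.user AND t2.id <= t1.id
         ORDER BY t2.id LIMIT 1) AS CurrentLocation
FROM tbl t1 ORDER BY id
""").fetchall()
print(rows)
# [(1, 'Tom', 'US', 'US'), (2, 'Tom', 'UK', 'US'),
#  (4, 'Bob', 'UK', 'UK'), (6, 'Bob', 'AUS', 'UK')]
```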
qid & accept id:
(10659824, 10659984)
query:
Mysql Count Distinct results
soup:
If you want to simultaneously count the number of rows with multiple specific criteria in a data set, you can use the pattern COUNT(CASE WHEN criteria THEN 1 END). Here's an example that counts the number of rows for stats = 2, and for stats = 3:
\nSELECT\n count(case when stats = 2 then 1 end) as ok,\n count(case when stats = 3 then 1 end) as not_ok\nfrom\n Table1\n
\nResults:
\nOK | NOT_OK\n-----------\n2 | 1\n
\nDemo: http://www.sqlfiddle.com/#!2/82414/1
\n
soup wrap:
If you want to simultaneously count the number of rows with multiple specific criteria in a data set, you can use the pattern COUNT(CASE WHEN criteria THEN 1 END). Here's an example that counts the number of rows for stats = 2, and for stats = 3:
SELECT
count(case when stats = 2 then 1 end) as ok,
count(case when stats = 3 then 1 end) as not_ok
from
Table1
Results:
OK | NOT_OK
-----------
2 | 1
Demo: http://www.sqlfiddle.com/#!2/82414/1
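The reason this works is that COUNT skips NULLs, and a CASE with no ELSE yields NULL for non-matching rows. A quick check in Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Table1 (stats INTEGER);
INSERT INTO Table1 VALUES (2), (2), (3);
""")
# Each COUNT(CASE ...) counts just the rows matching its own criterion.
row = cur.execute("""
SELECT COUNT(CASE WHEN stats = 2 THEN 1 END) AS ok,
       COUNT(CASE WHEN stats = 3 THEN 1 END) AS not_ok
FROM Table1
""").fetchone()
print(row)  # (2, 1)
```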
qid & accept id:
(10666965, 10667305)
query:
Joining two MySQL tables, but with additional conditions?
soup:
This is the answer:
\nselect a.id, a.name, a.category, a.price, b.filename as file_name \nfrom products a left join (\n select i.p_id, i.filename from (select p_id, min(priority) as min_p \n from images group by p_id) q \n left join images i on q.p_id = i.p_id and q.min_p = i.priority\n) b on a.id = b.p_id \nwhere a.category in (1, 2, 3);\n
\nEXPLANATION:
\nFirst, for each product, you need to find the lowest image priority, which this query gives you:
\nselect p_id, min(priority) as min_p from images group by p_id;\n
\nThe result will be:
\n+----+----------+\n| id | lowest_p |\n+----+----------+\n| 1 | 0 |\n| 2 | 2 |\n| 3 | 2 |\n| 4 | 1 |\n+----+----------+\n4 rows in set (0.00 sec)\n
\nThe next step will be an outer join; here I'd choose (arbitrarily, according to my preference) a left join:
\nselect i.p_id, i.filename from (select p_id, min(priority) as min_p \nfrom images group by p_id) q left join images i on q.p_id = i.p_id and q.min_p = i.priority;\n
\nThis query produces, in short, what you want:
\n+------+----------+\n| p_id | filename |\n+------+----------+\n| 1 | image1 |\n| 2 | image3 |\n| 3 | image4 |\n| 4 | image7 |\n+------+----------+\n4 rows in set (0.00 sec)\n
\nNow you just need to decorate this, again using left join:
\nselect a.id, a.name, a.category, a.price, b.filename as file_name \nfrom products a left join (\n select i.p_id, i.filename from (select p_id, min(priority) as min_p \n from images group by p_id) q \n left join images i on q.p_id = i.p_id and q.min_p = i.priority\n) b on a.id = b.p_id \nwhere a.category in (1, 2, 3);\n
\nAnd you'll get what you want:
\n+------+-------+----------+-------+-----------+\n| id | name | category | price | file_name |\n+------+-------+----------+-------+-----------+\n| 1 | item1 | 1 | 0.99 | image1 |\n| 2 | item2 | 2 | 1.99 | image3 |\n| 3 | item3 | 3 | 2.95 | image4 |\n+------+-------+----------+-------+-----------+\n3 rows in set (0.00 sec)\n
\nYou could also put products on the right-hand side of the left join, depending on what you expect when a product has no images available. The query above keeps such products, with the file_name field as NULL.
\nOn the other hand, those products will not appear at all if you put products on the right-hand side of the left join.
\n
soup wrap:
This is the answer:
select a.id, a.name, a.category, a.price, b.filename as file_name
from products a left join (
select i.p_id, i.filename from (select p_id, min(priority) as min_p
from images group by p_id) q
left join images i on q.p_id = i.p_id and q.min_p = i.priority
) b on a.id = b.p_id
where a.category in (1, 2, 3);
EXPLANATION:
First, for each product, you need to find the lowest image priority, which this query gives you:
select p_id, min(priority) as min_p from images group by p_id;
The result will be:
+------+-------+
| p_id | min_p |
+------+-------+
|    1 |     0 |
|    2 |     2 |
|    3 |     2 |
|    4 |     1 |
+------+-------+
4 rows in set (0.00 sec)
The next step will be an outer join; here I'd choose (arbitrarily, according to my preference) a left join:
select i.p_id, i.filename from (select p_id, min(priority) as min_p
from images group by p_id) q left join images i on q.p_id = i.p_id and q.min_p = i.priority;
This query produces, in short, what you want:
+------+----------+
| p_id | filename |
+------+----------+
| 1 | image1 |
| 2 | image3 |
| 3 | image4 |
| 4 | image7 |
+------+----------+
4 rows in set (0.00 sec)
Now you just need to decorate this, again using left join:
select a.id, a.name, a.category, a.price, b.filename as file_name
from products a left join (
select i.p_id, i.filename from (select p_id, min(priority) as min_p
from images group by p_id) q
left join images i on q.p_id = i.p_id and q.min_p = i.priority
) b on a.id = b.p_id
where a.category in (1, 2, 3);
And you'll get what you want:
+------+-------+----------+-------+-----------+
| id | name | category | price | file_name |
+------+-------+----------+-------+-----------+
| 1 | item1 | 1 | 0.99 | image1 |
| 2 | item2 | 2 | 1.99 | image3 |
| 3 | item3 | 3 | 2.95 | image4 |
+------+-------+----------+-------+-----------+
3 rows in set (0.00 sec)
You could also put products on the right-hand side of the left join, depending on what you expect when a product has no images available. The query above keeps such products, with the file_name field as NULL.
On the other hand, those products will not appear at all if you put products on the right-hand side of the left join.
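To make the greatest-n-per-group shape concrete, here is a runnable sketch in Python's sqlite3, joining back on (p_id, priority) so the grouped subquery and the filename lookup line up (schema and rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE products (id INTEGER, name TEXT);
CREATE TABLE images (id INTEGER, p_id INTEGER, filename TEXT, priority INTEGER);
INSERT INTO products VALUES (1, 'item1'), (2, 'item2');
INSERT INTO images VALUES (1, 1, 'image1', 0), (2, 1, 'image2', 5),
                          (3, 2, 'image3', 2);
""")
# The grouped subquery finds each product's lowest priority; joining back
# on (p_id, priority) picks that image's filename. Products with no image
# would survive the LEFT JOIN with a NULL file_name.
rows = cur.execute("""
SELECT p.id, p.name, b.filename AS file_name
FROM products p LEFT JOIN (
  SELECT i.p_id, i.filename
  FROM (SELECT p_id, MIN(priority) AS min_p FROM images GROUP BY p_id) q
  JOIN images i ON i.p_id = q.p_id AND i.priority = q.min_p
) b ON p.id = b.p_id
ORDER BY p.id
""").fetchall()
print(rows)  # [(1, 'item1', 'image1'), (2, 'item2', 'image3')]
```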
qid & accept id:
(10670090, 10670126)
query:
SQL Query With Calculated MIN, Requesting Other Column Returns All Rows
soup:
You can use:
\nSELECT TOP 1 ID, MIN(SQRT(POWER(100-x, 2) + POWER(150-y, 2))) AS distance FROM cabstands GROUP BY ID ORDER BY distance ASC\n
\nOr for MySQL:
\nSELECT ID, MIN(SQRT(POW(100-x, 2) + POW(150-y, 2))) AS distance FROM cabstands GROUP BY ID ORDER BY distance ASC LIMIT 1\n
\n
soup wrap:
You can use:
SELECT TOP 1 ID, MIN(SQRT(POWER(100-x, 2) + POWER(150-y, 2))) AS distance FROM cabstands GROUP BY ID ORDER BY distance ASC
Or for MySQL:
SELECT ID, MIN(SQRT(POW(100-x, 2) + POW(150-y, 2))) AS distance FROM cabstands GROUP BY ID ORDER BY distance ASC LIMIT 1
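A related trick: since SQRT is monotonic, you can order by the squared distance and skip the square root entirely, which also sidesteps engines where math functions are optional. A sketch in Python's sqlite3 with hypothetical data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE cabstands (ID INTEGER, x REAL, y REAL);
INSERT INTO cabstands VALUES (1, 100, 150), (2, 0, 0);
""")
# Order by squared distance to (100, 150) and keep only the nearest row;
# no SQRT needed because x -> sqrt(x) preserves the ordering.
row = cur.execute("""
SELECT ID, (100 - x)*(100 - x) + (150 - y)*(150 - y) AS dist2
FROM cabstands ORDER BY dist2 LIMIT 1
""").fetchone()
print(row)  # (1, 0.0)
```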
qid & accept id:
(10694376, 10694502)
query:
SQL how to handle a many to many relationship
soup:
Try this:
\nCREATE TABLE teamPlayer\n(\nplayerID INT NOT NULL, \nteamID INT NOT NULL,\nPRIMARY KEY(playerID, teamID)\n);\n\nalter table teamPlayer\nadd constraint \n fk_teamPlayer__Player foreign key(playerID) references Player(personID);\n\nalter table teamPlayer\nadd constraint \n fk_teamPlayer__Team foreign key(teamID) references Team(teamID);\n
\nOr this:
\nCREATE TABLE teamPlayer\n(\nplayerID INT NOT NULL, \nteamID INT NOT NULL,\nPRIMARY KEY(playerID, teamID),\n\nconstraint fk_teamPlayer__Player\nforeign key(playerID) references Player(personID),\n\nconstraint fk_teamPlayer__Team \nforeign key(teamID) references Team(teamID)\n\n);\n
\nIf you don't need to name your foreign keys explicitly, you can use this:
\nCREATE TABLE teamPlayer\n(\nplayerID INT NOT NULL references Player(personID), \nteamID INT NOT NULL references Team(teamID),\nPRIMARY KEY(playerID, teamID)\n);\n
\n
\nAll major RDBMSs pretty much comply with ANSI SQL for relationship DDL; the syntax is identical across them.
\nCREATE THEN ALTER(explicitly named foreign key):
\n\n- Postgresql: http://www.sqlfiddle.com/#!1/0a096
\n- MySQL: http://www.sqlfiddle.com/#!2/0a096
\n- Oracle: http://www.sqlfiddle.com/#!4/0a096
\n- SQL Server: http://www.sqlfiddle.com/#!3/0a096
\n
\nCREATE(explicitly named foreign key):
\n\n- Postgresql: http://www.sqlfiddle.com/#!1/46ebb
\n- MySQL: http://www.sqlfiddle.com/#!2/46ebb
\n- Oracle: http://www.sqlfiddle.com/#!4/46ebb
\n- SQL Server: http://www.sqlfiddle.com/#!3/46ebb
\n
\nCREATE(auto-named foreign key):
\n\n- Postgresql: http://www.sqlfiddle.com/#!1/82742
\n- MySQL: http://www.sqlfiddle.com/#!2/82742
\n- Oracle: http://www.sqlfiddle.com/#!4/82742
\n- Sql Server: http://www.sqlfiddle.com/#!3/82742
\n
\n
soup wrap:
Try this:
CREATE TABLE teamPlayer
(
playerID INT NOT NULL,
teamID INT NOT NULL,
PRIMARY KEY(playerID, teamID)
);
alter table teamPlayer
add constraint
fk_teamPlayer__Player foreign key(playerID) references Player(personID);
alter table teamPlayer
add constraint
fk_teamPlayer__Team foreign key(teamID) references Team(teamID);
Or this:
CREATE TABLE teamPlayer
(
playerID INT NOT NULL,
teamID INT NOT NULL,
PRIMARY KEY(playerID, teamID),
constraint fk_teamPlayer__Player
foreign key(playerID) references Player(personID),
constraint fk_teamPlayer__Team
foreign key(teamID) references Team(teamID)
);
If you don't need to name your foreign keys explicitly, you can use this:
CREATE TABLE teamPlayer
(
playerID INT NOT NULL references Player(personID),
teamID INT NOT NULL references Team(teamID),
PRIMARY KEY(playerID, teamID)
);
All major RDBMSs pretty much comply with ANSI SQL for relationship DDL; the syntax is identical across them.
CREATE THEN ALTER(explicitly named foreign key):
- Postgresql: http://www.sqlfiddle.com/#!1/0a096
- MySQL: http://www.sqlfiddle.com/#!2/0a096
- Oracle: http://www.sqlfiddle.com/#!4/0a096
- SQL Server: http://www.sqlfiddle.com/#!3/0a096
CREATE(explicitly named foreign key):
- Postgresql: http://www.sqlfiddle.com/#!1/46ebb
- MySQL: http://www.sqlfiddle.com/#!2/46ebb
- Oracle: http://www.sqlfiddle.com/#!4/46ebb
- SQL Server: http://www.sqlfiddle.com/#!3/46ebb
CREATE(auto-named foreign key):
- Postgresql: http://www.sqlfiddle.com/#!1/82742
- MySQL: http://www.sqlfiddle.com/#!2/82742
- Oracle: http://www.sqlfiddle.com/#!4/82742
- Sql Server: http://www.sqlfiddle.com/#!3/82742
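SQLite accepts the same inline REFERENCES form, though it only enforces foreign keys once the pragma is switched on. A sketch in Python's sqlite3 (invented player/team rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
cur.executescript("""
CREATE TABLE Player (personID INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE Team (teamID INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE teamPlayer (
  playerID INTEGER NOT NULL REFERENCES Player(personID),
  teamID   INTEGER NOT NULL REFERENCES Team(teamID),
  PRIMARY KEY (playerID, teamID)
);
INSERT INTO Player VALUES (1, 'Ann');
INSERT INTO Team VALUES (10, 'Reds');
INSERT INTO teamPlayer VALUES (1, 10);
""")
# Inserting a row that references a missing player violates the FK.
try:
    cur.execute("INSERT INTO teamPlayer VALUES (99, 10)")
    ok = False
except sqlite3.IntegrityError:
    ok = True
print(ok)  # True
```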
qid & accept id:
(10777996, 10778075)
query:
SQL query when a table has a link to itself
soup:
You can use Common Table Expressions (CTEs) to solve this problem. CTEs can be used for recursion, as Andrei pointed out (see the excellent reference that Andrei included in his post). Let's say you have a table as follows:
\ncreate table Person\n(\n PersonId int primary key,\n Name varchar(25),\n ManagerId int foreign Key references Person(PersonId)\n)\n
\nand let's insert the following data into the table:
\ninsert into Person (PersonId, Name, ManagerId) values \n (1,'Bob', null),\n (2, 'Steve',1),\n (3, 'Tim', 2),\n (4, 'John', 3),\n (5, 'James', null),\n (6, 'Joe', 5)\n
\nthen we want a query that will return everyone who directly or indirectly reports to Bob, which would be Steve, Tim and John. We don't want to return James and Bob, since they report to no one, or Joe, since he reports to James. This can be done with a CTE query as follows:
\nWITH Managers AS \n( \n --initialize\n SELECT PersonId, Name, ManagerId \n FROM Person WHERE ManagerId =1\n UNION ALL \n --recursion \n SELECT p.PersonId, p.Name, p.ManagerId \n FROM Person p INNER JOIN Managers m \n ON p.ManagerId = m.PersonId \n) \nSELECT * FROM Managers\n
\nThis query returns the correct results:
\nPersonId Name ManagerId\n----------- ------------------------- -----------\n2 Steve 1\n3 Tim 2\n4 John 3\n
\nEdit: This answer is valid assuming the OP is using SQL Server 2005 or higher. I do not know if this syntax is valid in MySQL or Oracle.
\n
soup wrap:
You can use Common Table Expressions (CTEs) to solve this problem. CTEs can be used for recursion, as Andrei pointed out (see the excellent reference that Andrei included in his post). Let's say you have a table as follows:
create table Person
(
PersonId int primary key,
Name varchar(25),
ManagerId int foreign Key references Person(PersonId)
)
and let's insert the following data into the table:
insert into Person (PersonId, Name, ManagerId) values
(1,'Bob', null),
(2, 'Steve',1),
(3, 'Tim', 2),
(4, 'John', 3),
(5, 'James', null),
(6, 'Joe', 5)
then we want a query that will return everyone who directly or indirectly reports to Bob, which would be Steve, Tim and John. We don't want to return James and Bob, since they report to no one, or Joe, since he reports to James. This can be done with a CTE query as follows:
WITH Managers AS
(
--initialize
SELECT PersonId, Name, ManagerId
FROM Person WHERE ManagerId =1
UNION ALL
--recursion
SELECT p.PersonId, p.Name, p.ManagerId
FROM Person p INNER JOIN Managers m
ON p.ManagerId = m.PersonId
)
SELECT * FROM Managers
This query returns the correct results:
PersonId Name ManagerId
----------- ------------------------- -----------
2 Steve 1
3 Tim 2
4 John 3
Edit: This answer is valid assuming the OP is using SQL Server 2005 or higher. I do not know if this syntax is valid in MySQL or Oracle.
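For what it's worth, both MySQL 8+ and SQLite accept essentially the same recursion with the RECURSIVE keyword spelled out. Here is the example replayed in Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Person (PersonId INTEGER PRIMARY KEY, Name TEXT, ManagerId INTEGER);
INSERT INTO Person VALUES (1,'Bob',NULL),(2,'Steve',1),(3,'Tim',2),
                          (4,'John',3),(5,'James',NULL),(6,'Joe',5);
""")
# Anchor = direct reports of person 1; recursive member = reports of
# anyone already found, exactly as in the CTE above.
rows = cur.execute("""
WITH RECURSIVE Managers AS (
  SELECT PersonId, Name, ManagerId FROM Person WHERE ManagerId = 1
  UNION ALL
  SELECT p.PersonId, p.Name, p.ManagerId
  FROM Person p JOIN Managers m ON p.ManagerId = m.PersonId
)
SELECT PersonId, Name FROM Managers ORDER BY PersonId
""").fetchall()
print(rows)  # [(2, 'Steve'), (3, 'Tim'), (4, 'John')]
```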
qid & accept id:
(10787043, 10788017)
query:
Returning a row if and only if a sibling row doesn't exist
soup:
SET search_path= 'tmp';\n\nDROP TABLE dogcat CASCADE;\nCREATE TABLE dogcat\n ( id serial NOT NULL\n , zname varchar\n , foo INTEGER\n , bar INTEGER\n , house_id INTEGER NOT NULL\n , PRIMARY KEY (zname,house_id)\n );\nINSERT INTO dogcat(zname,foo,bar,house_id) VALUES\n ('Cat',12,4,1)\n ,('Cat',9,4,2)\n ,('Dog',8,23,1)\n ,('Bird',9,54,1)\n ,('Bird',78,2,2)\n ,('Bird',29,32,3)\n ;\n-- Carthesian product of the {name,house_id} domains\nWITH cart AS (\n WITH beast AS (\n SELECT distinct zname AS zname\n FROM dogcat\n )\n , house AS (\n SELECT distinct house_id AS house_id\n FROM dogcat\n )\n SELECT beast.zname AS zname\n ,house.house_id AS house_id\n FROM beast , house\n )\nINSERT INTO dogcat(zname,house_id, foo,bar)\nSELECT ca.zname, ca.house_id\n ,fb.foo, fb.bar\nFROM cart ca\n -- find the animal with the lowes id\nJOIN dogcat fb ON fb.zname = ca.zname AND NOT EXISTS\n ( SELECT * FROM dogcat nx\n WHERE nx.zname = fb.zname\n AND nx.id < fb.id\n )\nWHERE NOT EXISTS (\n SELECT * FROM dogcat dc\n WHERE dc.zname = ca.zname\n AND dc.house_id = ca.house_id\n )\n ;\n\nSELECT * FROM dogcat;\n
\nResult:
\nSET\nDROP TABLE\nNOTICE: CREATE TABLE will create implicit sequence "dogcat_id_seq" for serial column "dogcat.id"\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "dogcat_pkey" for table "dogcat"\nCREATE TABLE\nINSERT 0 6\nINSERT 0 3\n id | zname | foo | bar | house_id \n----+-------+-----+-----+----------\n 1 | Cat | 12 | 4 | 1\n 2 | Cat | 9 | 4 | 2\n 3 | Dog | 8 | 23 | 1\n 4 | Bird | 9 | 54 | 1\n 5 | Bird | 78 | 2 | 2\n 6 | Bird | 29 | 32 | 3\n 7 | Cat | 12 | 4 | 3\n 8 | Dog | 8 | 23 | 2\n 9 | Dog | 8 | 23 | 3\n(9 rows)\n
\n
soup wrap:
SET search_path= 'tmp';
DROP TABLE dogcat CASCADE;
CREATE TABLE dogcat
( id serial NOT NULL
, zname varchar
, foo INTEGER
, bar INTEGER
, house_id INTEGER NOT NULL
, PRIMARY KEY (zname,house_id)
);
INSERT INTO dogcat(zname,foo,bar,house_id) VALUES
('Cat',12,4,1)
,('Cat',9,4,2)
,('Dog',8,23,1)
,('Bird',9,54,1)
,('Bird',78,2,2)
,('Bird',29,32,3)
;
-- Cartesian product of the {name,house_id} domains
WITH cart AS (
WITH beast AS (
SELECT distinct zname AS zname
FROM dogcat
)
, house AS (
SELECT distinct house_id AS house_id
FROM dogcat
)
SELECT beast.zname AS zname
,house.house_id AS house_id
FROM beast , house
)
INSERT INTO dogcat(zname,house_id, foo,bar)
SELECT ca.zname, ca.house_id
,fb.foo, fb.bar
FROM cart ca
-- find the animal with the lowest id
JOIN dogcat fb ON fb.zname = ca.zname AND NOT EXISTS
( SELECT * FROM dogcat nx
WHERE nx.zname = fb.zname
AND nx.id < fb.id
)
WHERE NOT EXISTS (
SELECT * FROM dogcat dc
WHERE dc.zname = ca.zname
AND dc.house_id = ca.house_id
)
;
SELECT * FROM dogcat;
Result:
SET
DROP TABLE
NOTICE: CREATE TABLE will create implicit sequence "dogcat_id_seq" for serial column "dogcat.id"
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "dogcat_pkey" for table "dogcat"
CREATE TABLE
INSERT 0 6
INSERT 0 3
id | zname | foo | bar | house_id
----+-------+-----+-----+----------
1 | Cat | 12 | 4 | 1
2 | Cat | 9 | 4 | 2
3 | Dog | 8 | 23 | 1
4 | Bird | 9 | 54 | 1
5 | Bird | 78 | 2 | 2
6 | Bird | 29 | 32 | 3
7 | Cat | 12 | 4 | 3
8 | Dog | 8 | 23 | 2
9 | Dog | 8 | 23 | 3
(9 rows)
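The core of the query, building the Cartesian product of the two domains and keeping only combinations that don't exist yet, can be isolated and tested on its own. A cut-down sketch in Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dogcat (id INTEGER PRIMARY KEY, zname TEXT, house_id INTEGER);
INSERT INTO dogcat (zname, house_id) VALUES
  ('Cat', 1), ('Cat', 2), ('Dog', 1);
""")
# Cross join of the distinct animals and houses, minus the pairs that
# already exist, yields the missing (animal, house) combinations.
missing = cur.execute("""
SELECT b.zname, h.house_id
FROM (SELECT DISTINCT zname FROM dogcat) b
CROSS JOIN (SELECT DISTINCT house_id FROM dogcat) h
WHERE NOT EXISTS (
  SELECT 1 FROM dogcat d
  WHERE d.zname = b.zname AND d.house_id = h.house_id)
ORDER BY b.zname, h.house_id
""").fetchall()
print(missing)  # [('Dog', 2)]
```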
qid & accept id:
(10797333, 10797805)
query:
How to Specify Array variable in plsql
soup:
There are a couple of different approaches you could take to get data into your array. The first would be a simple loop, as in the following:
\nDECLARE\n TYPE NUMBER_ARRAY IS VARRAY(100) OF NUMBER;\n\n arrNums NUMBER_ARRAY;\n i NUMBER := 1;\nBEGIN\n arrNums := NUMBER_ARRAY();\n\n FOR aRow IN (SELECT NUMBER_FIELD\n FROM A_TABLE\n WHERE ROWNUM <= 100)\n LOOP\n arrNums.EXTEND;\n arrNums(i) := aRow.NUMBER_FIELD;\n i := i + 1;\n END LOOP;\nend;\n
\nAnother, as suggested by @Rene, would be to use BULK COLLECT, as follows:
\nDECLARE\n TYPE NUMBER_ARRAY IS VARRAY(100) OF NUMBER;\n\n arrNums NUMBER_ARRAY;\nBEGIN\n arrNums := NUMBER_ARRAY();\n arrNums.EXTEND(100);\n\n SELECT NUMBER_FIELD\n BULK COLLECT INTO arrNums\n FROM A_TABLE\n WHERE ROWNUM <= 100;\nend;\n
\nShare and enjoy.
\n
soup wrap:
There are a couple of different approaches you could take to get data into your array. The first would be a simple loop, as in the following:
DECLARE
TYPE NUMBER_ARRAY IS VARRAY(100) OF NUMBER;
arrNums NUMBER_ARRAY;
i NUMBER := 1;
BEGIN
arrNums := NUMBER_ARRAY();
FOR aRow IN (SELECT NUMBER_FIELD
FROM A_TABLE
WHERE ROWNUM <= 100)
LOOP
arrNums.EXTEND;
arrNums(i) := aRow.NUMBER_FIELD;
i := i + 1;
END LOOP;
end;
Another, as suggested by @Rene, would be to use BULK COLLECT, as follows:
DECLARE
TYPE NUMBER_ARRAY IS VARRAY(100) OF NUMBER;
arrNums NUMBER_ARRAY;
BEGIN
arrNums := NUMBER_ARRAY();
arrNums.EXTEND(100);
SELECT NUMBER_FIELD
BULK COLLECT INTO arrNums
FROM A_TABLE
WHERE ROWNUM <= 100;
end;
Share and enjoy.
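For readers coming from a client language rather than PL/SQL, the closest analog of the two approaches is row-by-row fetching versus a batched fetch; Python's DB-API fetchmany plays roughly the role of BULK COLLECT with a bounded VARRAY. A hedged sketch with sqlite3 (table name and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE A_TABLE (NUMBER_FIELD INTEGER)")
cur.executemany("INSERT INTO A_TABLE VALUES (?)", [(i,) for i in range(250)])
# Pull up to 100 rows at once, like BULK COLLECT into a VARRAY(100);
# the row-by-row loop would instead iterate the cursor one fetch at a time.
cur.execute("SELECT NUMBER_FIELD FROM A_TABLE ORDER BY NUMBER_FIELD")
arr_nums = [r[0] for r in cur.fetchmany(100)]
print(len(arr_nums), arr_nums[:3])  # 100 [0, 1, 2]
```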
qid & accept id:
(10813098, 10813179)
query:
JOIN multiple fields to one field
soup:
You do it the same way you will do normally -
\nSELECT ABC.*, XYZ.* FROM XYZ, ABC\nWHERE \nXYZ.KOD_TYPE=ABC.REMARK1\nAND XYZ.KOD_TYPE=ABC.REMARK2\nAND XYZ.KOD_TYPE=ABC.REMARK3\nAND XYZ.KOD_TYPE=ABC.REMARK4\nAND XYZ.KOD_TYPE=ABC.REMARK5\n
\nIf you need a query where any one remark matches:
\nSELECT ABC.*, XYZ.* FROM XYZ, ABC\nWHERE \nXYZ.KOD_TYPE=ABC.REMARK1\nOR XYZ.KOD_TYPE=ABC.REMARK2\nOR XYZ.KOD_TYPE=ABC.REMARK3\nOR XYZ.KOD_TYPE=ABC.REMARK4\nOR XYZ.KOD_TYPE=ABC.REMARK5\n
\n
soup wrap:
You do it the same way you normally would:
SELECT ABC.*, XYZ.* FROM XYZ, ABC
WHERE
XYZ.KOD_TYPE=ABC.REMARK1
AND XYZ.KOD_TYPE=ABC.REMARK2
AND XYZ.KOD_TYPE=ABC.REMARK3
AND XYZ.KOD_TYPE=ABC.REMARK4
AND XYZ.KOD_TYPE=ABC.REMARK5
If you need a query where any one remark matches:
SELECT ABC.*, XYZ.* FROM XYZ, ABC
WHERE
XYZ.KOD_TYPE=ABC.REMARK1
OR XYZ.KOD_TYPE=ABC.REMARK2
OR XYZ.KOD_TYPE=ABC.REMARK3
OR XYZ.KOD_TYPE=ABC.REMARK4
OR XYZ.KOD_TYPE=ABC.REMARK5
qid & accept id:
(10860452, 27342422)
query:
How to discover the columns for a given index or key in MonetDB
soup:
Two and a half years later, because I was intrigued by the question: You can indeed find the columns for a given key using the poorly named "objects" table.
\nFor example, consider the following table
\nCREATE TABLE indextest (a INT, b INT);\nALTER TABLE indextest ADD CONSTRAINT indextest_pk PRIMARY KEY (a);\nALTER TABLE indextest ADD CONSTRAINT indextest_uq UNIQUE (a, b); \n
\nNow let's find out which columns belong to indextest_uq:
\nSELECT idxs.id AS index_id, columns.id AS column_id, tables.name AS table_name, columns.name AS column_name, columns.type AS column_type \nFROM idxs JOIN objects ON idxs.id=objects.id JOIN tables ON idxs.table_id=tables.id JOIN columns ON idxs.table_id=columns.table_id AND objects.name=columns.name \nWHERE idxs.name='indextest_uq';\n
\nThe result of this query looks like this:
\n+----------+-----------+------------+-------------+-------------+\n| index_id | column_id | table_name | column_name | column_type |\n+==========+===========+============+=============+=============+\n| 6446 | 6438 | indextest | a | int |\n| 6446 | 6439 | indextest | b | int |\n+----------+-----------+------------+-------------+-------------+\n
\nObviously, more information from the columns and tables tables could be included by extending the SELECT part of the query.
\n
soup wrap:
Two and a half years later, because I was intrigued by the question: You can indeed find the columns for a given key using the poorly named "objects" table.
For example, consider the following table
CREATE TABLE indextest (a INT, b INT);
ALTER TABLE indextest ADD CONSTRAINT indextest_pk PRIMARY KEY (a);
ALTER TABLE indextest ADD CONSTRAINT indextest_uq UNIQUE (a, b);
Now let's find out which columns belong to indextest_uq:
SELECT idxs.id AS index_id, columns.id AS column_id, tables.name AS table_name, columns.name AS column_name, columns.type AS column_type
FROM idxs JOIN objects ON idxs.id=objects.id JOIN tables ON idxs.table_id=tables.id JOIN columns ON idxs.table_id=columns.table_id AND objects.name=columns.name
WHERE idxs.name='indextest_uq';
The result of this query looks like this:
+----------+-----------+------------+-------------+-------------+
| index_id | column_id | table_name | column_name | column_type |
+==========+===========+============+=============+=============+
| 6446 | 6438 | indextest | a | int |
| 6446 | 6439 | indextest | b | int |
+----------+-----------+------------+-------------+-------------+
Obviously, more information from the columns and tables tables could be included by extending the SELECT part of the query.
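The query above is MonetDB-specific, but the underlying task of listing the columns behind a named index or key exists in most engines' catalogs. For comparison, a sketch of the same lookup in SQLite (via PRAGMA index_info), reusing the same hypothetical table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE indextest (a INT, b INT);
CREATE UNIQUE INDEX indextest_uq ON indextest (a, b);
""")

# PRAGMA index_info returns (seqno, cid, name) per covered column,
# in index-column order.
cols = [row[2] for row in conn.execute("PRAGMA index_info('indextest_uq')")]
print(cols)   # ['a', 'b']
```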
qid & accept id:
(10918093, 10920001)
query:
Changing part of a string on some values in postgres database
soup:
You need to use this Postgres function
\noverlay(string placing string from int [for int]) \nex: overlay('Txxxxas' placing 'hom' from 2 for 4)\n
\nYour situation involves the select statement having the following:
\noverlay(location placing '/home/BBB' from 1 for 9)\n
\nYou can get more information from here.
\n
soup wrap:
You need to use this Postgres function
overlay(string placing string from int [for int])
ex: overlay('Txxxxas' placing 'hom' from 2 for 4)
Your situation involves the select statement having the following:
overlay(location placing '/home/BBB' from 1 for 9)
You can get more information from here.
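If it helps to see the semantics spelled out, here is a small Python model of overlay (1-based start position, as in SQL); the path values are made-up examples:

```python
def overlay(s: str, replacement: str, start: int, length: int) -> str:
    """Mimic Postgres overlay(string placing string from start [for length]).

    start is 1-based, as in SQL: `length` characters beginning at `start`
    are replaced by `replacement`.
    """
    i = start - 1
    return s[:i] + replacement + s[i + length:]

print(overlay("Txxxxas", "hom", 2, 4))                 # Thomas
print(overlay("/home/AAA/f.txt", "/home/BBB", 1, 9))   # /home/BBB/f.txt
```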
qid & accept id:
(10919401, 10919437)
query:
Select all data in sql with where condition?
soup:
The usual trick is to set a separate parameter for selecting everything:
\nSELECT book FROM com WHERE genre=? OR 1=?\n
\nWhen you set the second parameter to 0, filtering by genre is used, but when you set it to 1, everything is returned.
\nIf you are willing to switch to using named JDBC parameters, you could rewrite with one parameter, and use null to mean "select everything":
\nSELECT book FROM com WHERE genre=:genre_param OR :genre_param is null\n
\n
soup wrap:
The usual trick is to set a separate parameter for selecting everything:
SELECT book FROM com WHERE genre=? OR 1=?
When you set the second parameter to 0, filtering by genre is used, but when you set it to 1, everything is returned.
If you are willing to switch to using named JDBC parameters, you could rewrite with one parameter, and use null to mean "select everything":
SELECT book FROM com WHERE genre=:genre_param OR :genre_param is null
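The flag-parameter trick is easy to verify; a sketch in Python with sqlite3 (the table contents are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE com (book TEXT, genre TEXT);
INSERT INTO com VALUES ('Dune', 'scifi'), ('It', 'horror');
""")

query = "SELECT book FROM com WHERE genre = ? OR 1 = ?"

# Second parameter 0: the OR arm is false, so the genre filter applies.
filtered = conn.execute(query, ("scifi", 0)).fetchall()
# Second parameter 1: "1 = 1" is true for every row, so everything comes back.
everything = conn.execute(query, ("scifi", 1)).fetchall()

print(filtered)          # [('Dune',)]
print(len(everything))   # 2
```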
qid & accept id:
(10979035, 10979094)
query:
Single default value in a table
soup:
The easiest way I see is a check constraint with a UDF (User Defined function).
\nLook at here, for example.\nhttp://sqljourney.wordpress.com/2010/06/25/check-constraint-with-user-defined-function-in-sql-server/
\nUntested example
\nCREATE FUNCTION dbo.CheckDefaultUnicity(@UserId int)\nRETURNS int\nAS \nBEGIN\n DECLARE @retval int\n SELECT @retval = COUNT(*) FROM where UserId = @UserId and = 1-- or whatever is your default value\n RETURN @retval \nEND;\nGO\n
\nand alter your table
\nALTER TABLE \nADD CONSTRAINT Ck_UniqueDefaultForUser \nCHECK (dbo.CheckDefaultUnicity(UserId) <2)\n
\n
soup wrap:
The easiest way I see is a check constraint with a UDF (User Defined function).
Look here, for example.
http://sqljourney.wordpress.com/2010/06/25/check-constraint-with-user-defined-function-in-sql-server/
Untested example
CREATE FUNCTION dbo.CheckDefaultUnicity(@UserId int)
RETURNS int
AS
BEGIN
DECLARE @retval int
SELECT @retval = COUNT(*) FROM YourTable -- placeholder: your table name
WHERE UserId = @UserId AND IsDefault = 1 -- placeholder: your default flag column and value
RETURN @retval
END;
GO
and alter your table
ALTER TABLE YourTable -- placeholder: your table name
ADD CONSTRAINT Ck_UniqueDefaultForUser
CHECK (dbo.CheckDefaultUnicity(UserId) <2)
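As an aside, engines that support filtered (partial) unique indexes can enforce "one default per user" declaratively, without a UDF. A sketch in SQLite with hypothetical table and column names (user_settings, IsDefault):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE user_settings (UserId INTEGER, IsDefault INTEGER);
-- At most one IsDefault = 1 row per user, enforced declaratively.
CREATE UNIQUE INDEX one_default_per_user
    ON user_settings (UserId) WHERE IsDefault = 1;
""")

conn.execute("INSERT INTO user_settings VALUES (1, 1)")
conn.execute("INSERT INTO user_settings VALUES (1, 0)")  # fine: not a default
try:
    conn.execute("INSERT INTO user_settings VALUES (1, 1)")  # second default
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```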
qid & accept id:
(10993189, 10993263)
query:
Oracle Regex expression to match exactly non digit then digits again
soup:
Just remove the .* at the end of your expression; it is responsible for matching the additional stuff.
\nSELECT 1 FROM DUAL WHERE \n REGEXP_LIKE('555-5555x123', '^[0-9]{3,4}[^[:digit:]][0-9]{4}$')\n
\nThat way it matches 3 or 4 digits, a non-digit, and 4 more digits.
\nThe {3,4} and {4} are the quantifiers that define the amount of digits you want to allow. Just change them to the values you need. E.g. {4,} would match 4 or more.
\n^ anchors the regex to the start of the string and $ to the end.
\nUpdate
\nTo ensure that there is a non digit after the 4 digits at the end you can use an alternation
\nSELECT 1 FROM DUAL WHERE \n REGEXP_LIKE('555-5555x123', '^[0-9]{3,4}[^[:digit:]][0-9]{4}($|[^0-9].*$)')\n
\nNow, after your 4 digits there must be either the end of the row OR a non digit ([^0-9] is a negated character class), then anything (but newlines) till the end of the row.
\nI don't know if it is important in your case, but [^0-9] would also match a newline character, if you want to avoid this use [^0-9\r\n]
\n
soup wrap:
Just remove the .* at the end of your expression; it is responsible for matching the additional stuff.
SELECT 1 FROM DUAL WHERE
REGEXP_LIKE('555-5555x123', '^[0-9]{3,4}[^[:digit:]][0-9]{4}$')
That way it matches 3 or 4 digits, a non-digit, and 4 more digits.
The {3,4} and {4} are the quantifiers that define the amount of digits you want to allow. Just change them to the values you need. E.g. {4,} would match 4 or more.
^ anchors the regex to the start of the string and $ to the end.
Update
To ensure that there is a non digit after the 4 digits at the end you can use an alternation
SELECT 1 FROM DUAL WHERE
REGEXP_LIKE('555-5555x123', '^[0-9]{3,4}[^[:digit:]][0-9]{4}($|[^0-9].*$)')
Now, after your 4 digits there must be either the end of the row OR a non digit ([^0-9] is a negated character class), then anything (but newlines) till the end of the row.
I don't know if it is important in your case, but [^0-9] would also match a newline character, if you want to avoid this use [^0-9\r\n]
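The same patterns can be checked outside the database with Python's re module (the POSIX class [[:digit:]] becomes [0-9] there):

```python
import re

anchored = re.compile(r"^[0-9]{3,4}[^0-9][0-9]{4}$")
with_tail = re.compile(r"^[0-9]{3,4}[^0-9][0-9]{4}($|[^0-9].*$)")

print(bool(anchored.match("555-5555")))        # True
print(bool(anchored.match("555-5555x123")))    # False: trailing extension
print(bool(with_tail.match("555-5555x123")))   # True: non-digit follows
print(bool(with_tail.match("555-55551")))      # False: a fifth digit follows
```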
qid & accept id:
(10993546, 10993655)
query:
Changing the column Type in SQL
soup:
My way of doing this:
\n(1) Add a new column:
\nALTER TABLE yourtable \nADD COLUMN `new_date` DATE NULL AFTER `views`; \n
\n(2) Update the new column
\nUPDATE yourtable SET new_date = old_date;\n
\nTake care of the data's formatting in old_date. If it isn't formatted yyyy-mm-dd, you might need STR_TO_DATE or some string replacements in this UPDATE statement to fit your purposes.
\nExample:
\nIf your data looks like this: mmmm dd, yyyy, hh:mm (e.g. May 17, 2012, 8:36 pm), you can update like this:
\nUPDATE yourtable\nSET new_date = STR_TO_DATE(old_date, "%M %e, %Y");\n
\nSTR_TO_DATE parses string data into a date value according to the given format.
\n(3) Delete the old column
\nALTER TABLE yourtable \nDROP COLUMN `old_date`; \n
\n(4) Rename the new column
\nALTER TABLE yourtable \nCHANGE `new_date` `old_date` DATE NULL; \n
\nDone!
\n
soup wrap:
My way of doing this:
(1) Add a new column:
ALTER TABLE yourtable
ADD COLUMN `new_date` DATE NULL AFTER `views`;
(2) Update the new column
UPDATE yourtable SET new_date = old_date;
Take care of the data's formatting in old_date. If it isn't formatted yyyy-mm-dd, you might need STR_TO_DATE or some string replacements in this UPDATE statement to fit your purposes.
Example:
If your data looks like this: mmmm dd, yyyy, hh:mm (e.g. May 17, 2012, 8:36 pm), you can update like this:
UPDATE yourtable
SET new_date = STR_TO_DATE(old_date, "%M %e, %Y");
STR_TO_DATE parses string data into a date value according to the given format.
(3) Delete the old column
ALTER TABLE yourtable
DROP COLUMN `old_date`;
(4) Rename the new column
ALTER TABLE yourtable
CHANGE `new_date` `old_date` DATE NULL;
Done!
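If you want to dry-run the date format before touching the table, the equivalent parse in Python (strptime's %B/%d/%Y correspond to MySQL's %M/%e/%Y; assumes an English locale for month names):

```python
from datetime import datetime

# Python analogue of STR_TO_DATE('May 17, 2012', '%M %e, %Y'):
# %B = full month name, %d = day of month, %Y = 4-digit year.
parsed = datetime.strptime("May 17, 2012", "%B %d, %Y").date()
print(parsed.isoformat())   # 2012-05-17
```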
qid & accept id:
(10999396, 10999467)
query:
How do I use an INSERT statement's OUTPUT clause to get the identity value?
soup:
You can either have the newly inserted ID being output to the SSMS console like this:
\nINSERT INTO MyTable(Name, Address, PhoneNo)\nOUTPUT INSERTED.ID\nVALUES ('Yatrix', '1234 Address Stuff', '1112223333')\n
\nYou can use this also from e.g. C#, when you need to get the ID back to your calling app - just execute the SQL query with .ExecuteScalar() (instead of .ExecuteNonQuery()) to read the resulting ID back.
\nOr if you need to capture the newly inserted ID inside T-SQL (e.g. for later further processing), you need to create a table variable:
\nDECLARE @OutputTbl TABLE (ID INT)\n\nINSERT INTO MyTable(Name, Address, PhoneNo)\nOUTPUT INSERTED.ID INTO @OutputTbl(ID)\nVALUES ('Yatrix', '1234 Address Stuff', '1112223333')\n
\nThis way, you can put multiple values into @OutputTbl and do further processing on those. You could also use a "regular" temporary table (#temp) or even a "real" persistent table as your "output target" here.
\n
soup wrap:
You can either have the newly inserted ID being output to the SSMS console like this:
INSERT INTO MyTable(Name, Address, PhoneNo)
OUTPUT INSERTED.ID
VALUES ('Yatrix', '1234 Address Stuff', '1112223333')
You can use this also from e.g. C#, when you need to get the ID back to your calling app - just execute the SQL query with .ExecuteScalar() (instead of .ExecuteNonQuery()) to read the resulting ID back.
Or if you need to capture the newly inserted ID inside T-SQL (e.g. for later further processing), you need to create a table variable:
DECLARE @OutputTbl TABLE (ID INT)
INSERT INTO MyTable(Name, Address, PhoneNo)
OUTPUT INSERTED.ID INTO @OutputTbl(ID)
VALUES ('Yatrix', '1234 Address Stuff', '1112223333')
This way, you can put multiple values into @OutputTbl and do further processing on those. You could also use a "regular" temporary table (#temp) or even a "real" persistent table as your "output target" here.
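The OUTPUT clause is SQL Server syntax. For comparison, the analogous "get the generated key back without a second query" step in Python's sqlite3 is the cursor's lastrowid (a sketch with the same hypothetical table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE MyTable (
    ID INTEGER PRIMARY KEY AUTOINCREMENT,
    Name TEXT, Address TEXT, PhoneNo TEXT)""")

cur = conn.execute(
    "INSERT INTO MyTable (Name, Address, PhoneNo) VALUES (?, ?, ?)",
    ("Yatrix", "1234 Address Stuff", "1112223333"))

# The auto-generated key comes back on the cursor, no second query needed.
print(cur.lastrowid)   # 1
```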
qid & accept id:
(11019847, 11020207)
query:
Database design pattern where one attribute only applies if another attribute has certain value(s)
soup:
I would call this a data dependency. Not all data dependencies can be modeled directly or conveniently with relational decomposition. This one can be handled pretty easily with a check constraint:
\nCREATE TABLE Students (\n id SERIAL PRIMARY KEY, -- for example, something else in reality\n grade INTEGER NOT NULL,\n honors BOOLEAN,\n CONSTRAINT ensure_honors_grade \n CHECK((honors IS NULL AND grade < 7) OR \n (honors IS NOT NULL AND grade >= 7))\n);\n
\nAnother solution might be to use two tables:
\nCREATE TABLE Students (\n id SERIAL PRIMARY KEY,\n grade INTEGER NOT NULL,\n CONSTRAINT id_grade_unique UNIQUE (id, grade) -- needed for FK constraint below\n);\n\nCREATE TABLE Honors (\n student_id INTEGER NOT NULL,\n grade INTEGER NOT NULL,\n honors BOOLEAN NOT NULL,\n CONSTRAINT student_fk FOREIGN KEY (student_id, grade) REFERENCES Students(id, grade),\n CONSTRAINT valid_grade CHECK(grade >= 7)\n);\n
\nThis alternative design is more explicit about the relationship between the grade and whether or not there is an honors flag, and leaves room for further differentiation of students in grades 7-8 (though the table name should be improved). If you only have the one property, the honors boolean, then this is probably overkill. As @BrankoDimitrijevic mentions, this doesn't enforce the existence of a row in Honors just because the grade is 7 or 8, and you're also paying for an index you wouldn't otherwise need. So there are tradeoffs; these are certainly not the only two designs possible; Branko also suggests using triggers.
\nWhen it comes to OO design, @Ryan is correct, but for proper relational database design one does not, in general, approach problems by trying to identify inheritance patterns. That is the OO perspective. It will always be important to concern yourself with your access patterns and how your code will be getting at the data, but in relational database design, one strives for normalization and flexibility in the database first and the code second, because there will invariably be multiple codebases getting at the data and you want to ensure the data is always valid no matter how buggy the accessing code is.
\n
soup wrap:
I would call this a data dependency. Not all data dependencies can be modeled directly or conveniently with relational decomposition. This one can be handled pretty easily with a check constraint:
CREATE TABLE Students (
id SERIAL PRIMARY KEY, -- for example, something else in reality
grade INTEGER NOT NULL,
honors BOOLEAN,
CONSTRAINT ensure_honors_grade
CHECK((honors IS NULL AND grade < 7) OR
(honors IS NOT NULL AND grade >= 7))
);
Another solution might be to use two tables:
CREATE TABLE Students (
id SERIAL PRIMARY KEY,
grade INTEGER NOT NULL,
CONSTRAINT id_grade_unique UNIQUE (id, grade) -- needed for FK constraint below
);
CREATE TABLE Honors (
student_id INTEGER NOT NULL,
grade INTEGER NOT NULL,
honors BOOLEAN NOT NULL,
CONSTRAINT student_fk FOREIGN KEY (student_id, grade) REFERENCES Students(id, grade),
CONSTRAINT valid_grade CHECK(grade >= 7)
);
This alternative design is more explicit about the relationship between the grade and whether or not there is an honors flag, and leaves room for further differentiation of students in grades 7-8 (though the table name should be improved). If you only have the one property, the honors boolean, then this is probably overkill. As @BrankoDimitrijevic mentions, this doesn't enforce the existence of a row in Honors just because the grade is 7 or 8, and you're also paying for an index you wouldn't otherwise need. So there are tradeoffs; these are certainly not the only two designs possible; Branko also suggests using triggers.
When it comes to OO design, @Ryan is correct, but for proper relational database design one does not, in general, approach problems by trying to identify inheritance patterns. That is the OO perspective. It will always be important to concern yourself with your access patterns and how your code will be getting at the data, but in relational database design, one strives for normalization and flexibility in the database first and the code second, because there will invariably be multiple codebases getting at the data and you want to ensure the data is always valid no matter how buggy the accessing code is.
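The check-constraint version is portable enough to try anywhere; a quick Python/sqlite3 sketch showing a conforming and a violating row (SERIAL becomes INTEGER PRIMARY KEY here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE Students (
    id INTEGER PRIMARY KEY,
    grade INTEGER NOT NULL,
    honors BOOLEAN,
    CONSTRAINT ensure_honors_grade
        CHECK ((honors IS NULL AND grade < 7) OR
               (honors IS NOT NULL AND grade >= 7))
)""")

conn.execute("INSERT INTO Students (grade, honors) VALUES (5, NULL)")  # ok
conn.execute("INSERT INTO Students (grade, honors) VALUES (8, 1)")     # ok
try:
    # an honors flag on a 5th grader violates the constraint
    conn.execute("INSERT INTO Students (grade, honors) VALUES (5, 1)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```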
qid & accept id:
(11033340, 11033391)
query:
How to find sum of multiple columns in a table in SQL Server 2005?
soup:
Easy:
\nSELECT \n Val1,\n Val2,\n Val3,\n (Val1 + Val2 + Val3) as 'Total'\nFROM Emp\n
\nor if you just want one row:
\nSELECT \n SUM(Val1) as 'Val1',\n SUM(Val2) as 'Val2',\n SUM(Val3) as 'Val3',\n (SUM(Val1) + SUM(Val2) + SUM(Val3)) as 'Total'\nFROM Emp\n
\n
soup wrap:
Easy:
SELECT
Val1,
Val2,
Val3,
(Val1 + Val2 + Val3) as 'Total'
FROM Emp
or if you just want one row:
SELECT
SUM(Val1) as 'Val1',
SUM(Val2) as 'Val2',
SUM(Val3) as 'Val3',
(SUM(Val1) + SUM(Val2) + SUM(Val3)) as 'Total'
FROM Emp
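Both shapes are easy to confirm; a minimal Python/sqlite3 sketch with made-up values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Emp (Val1 INT, Val2 INT, Val3 INT);
INSERT INTO Emp VALUES (1, 2, 3), (10, 20, 30);
""")

# One total per row.
per_row = conn.execute(
    "SELECT Val1 + Val2 + Val3 AS Total FROM Emp").fetchall()
# One grand total across all rows.
one_row = conn.execute(
    "SELECT SUM(Val1) + SUM(Val2) + SUM(Val3) AS Total FROM Emp").fetchone()

print(per_row)   # [(6,), (60,)]
print(one_row)   # (66,)
```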
qid & accept id:
(11097839, 11098733)
query:
How to create a not null column in a view
soup:
You can't add a not null or check constraint to a view; see this and on the same page 'Restrictions on NOT NULL Constraints' and 'Restrictions on Check Constraints'. You can add a with check option (against a redundant where clause) to the view but that won't be marked as not null in the data dictionary.
\nThe only way I can think to get this effect is, if you're on 11g, to add the cast value as a virtual column on the table, and (if it's still needed) create the view against that:
\nALTER TABLE "MyTable" ADD "MyBDColumn" AS\n (CAST("MyColumn" AS BINARY_DOUBLE)) NOT NULL;\n\nCREATE OR REPLACE VIEW "MyView" AS\nSELECT\n "MyBDColumn" AS "MyColumn"\nFROM "MyTable";\n\ndesc "MyView"\n\n Name Null? Type\n ----------------------------------------- -------- ----------------------------\n MyColumn NOT NULL BINARY_DOUBLE\n
\n
\nSince you said in a comment on dba.se that this is for mocking something up, you could use a normal column and a trigger to simulate the virtual column:
\nCREATE TABLE "MyTable" \n(\n "MyColumn" NUMBER NOT NULL,\n "MyBDColumn" BINARY_DOUBLE NOT NULL\n);\n\nCREATE TRIGGER "MyTrigger" before update or insert on "MyTable"\nFOR EACH ROW\nBEGIN\n :new."MyBDColumn" := :new."MyColumn";\nEND;\n/\n\nCREATE VIEW "MyView" AS\nSELECT\n "MyBDColumn" AS "MyColumn"\nFROM "MyTable";\n\nINSERT INTO "MyTable" ("MyColumn") values (2);\n\nSELECT * FROM "MyView";\n\n MyColumn\n----------\n 2.0E+000\n
\nAnd desc "MyView" still gives:
\n Name Null? Type\n ----------------------------------------- -------- ----------------------------\n MyColumn NOT NULL BINARY_DOUBLE\n
\nAs Leigh mentioned (also on dba.se), if you did want to insert/update the view you could use an instead of trigger, with the VC or fake version.
\n
soup wrap:
You can't add a not null or check constraint to a view; see this and on the same page 'Restrictions on NOT NULL Constraints' and 'Restrictions on Check Constraints'. You can add a with check option (against a redundant where clause) to the view but that won't be marked as not null in the data dictionary.
The only way I can think to get this effect is, if you're on 11g, to add the cast value as a virtual column on the table, and (if it's still needed) create the view against that:
ALTER TABLE "MyTable" ADD "MyBDColumn" AS
(CAST("MyColumn" AS BINARY_DOUBLE)) NOT NULL;
CREATE OR REPLACE VIEW "MyView" AS
SELECT
"MyBDColumn" AS "MyColumn"
FROM "MyTable";
desc "MyView"
Name Null? Type
----------------------------------------- -------- ----------------------------
MyColumn NOT NULL BINARY_DOUBLE
Since you said in a comment on dba.se that this is for mocking something up, you could use a normal column and a trigger to simulate the virtual column:
CREATE TABLE "MyTable"
(
"MyColumn" NUMBER NOT NULL,
"MyBDColumn" BINARY_DOUBLE NOT NULL
);
CREATE TRIGGER "MyTrigger" before update or insert on "MyTable"
FOR EACH ROW
BEGIN
:new."MyBDColumn" := :new."MyColumn";
END;
/
CREATE VIEW "MyView" AS
SELECT
"MyBDColumn" AS "MyColumn"
FROM "MyTable";
INSERT INTO "MyTable" ("MyColumn") values (2);
SELECT * FROM "MyView";
MyColumn
----------
2.0E+000
And desc "MyView" still gives:
Name Null? Type
----------------------------------------- -------- ----------------------------
MyColumn NOT NULL BINARY_DOUBLE
As Leigh mentioned (also on dba.se), if you did want to insert/update the view you could use an instead of trigger, with the VC or fake version.
qid & accept id:
(11104819, 11104987)
query:
sql query: create a table by merging rows from an exisiting table as follows:
soup:
I assume your node1 and node2 are integer foreign keys linking to a node table, and the table you mention is an edge table?
\nAssuming the edge table has been created with something like:
\nCREATE TABLE edges( node1 INTEGER, node2 INTEGER, weight REAL );\n
\nHow about something like (assuming no self-arcs and for every link from a->b there is also a link from b->a):
\nCREATE TABLE newedges( node1 INTEGER, node2 INTEGER, weight1 REAL, weight2 REAL );\n\nINSERT INTO newedges\n SELECT e1.node1, e1.node2, e1.weight, e2.weight\n FROM edges AS e1 INNER JOIN edges AS e2\n ON e1.node1=e2.node2 AND e1.node2=e2.node1\n WHERE e1.node1 < e1.node2;\n
\nThe self-join collates forward and backwards edges, and the requirement that e1.node1 is less than e1.node2 ensures that you only see each collated edge once.
\nEdit in response to a request to fill in zeros for missing backwards edge:
\nFor missing backwards edges, you can do a LEFT JOIN and use a CASE statement to fill in the gaps with zeros:
\nINSERT INTO newedges\n SELECT\n e1.node1,\n e1.node2,\n e1.weight,\n CASE WHEN e2.weight IS NULL THEN 0.0 ELSE e2.weight END\n FROM edges AS e1 LEFT JOIN edges AS e2\n ON e1.node1=e2.node2 AND e1.node2=e2.node1\n WHERE e1.node1 < e1.node2;\n
\nHope that helps!
\n
soup wrap:
I assume your node1 and node2 are integer foreign keys linking to a node table, and the table you mention is an edge table?
Assuming the edge table has been created with something like:
CREATE TABLE edges( node1 INTEGER, node2 INTEGER, weight REAL );
How about something like (assuming no self-arcs and for every link from a->b there is also a link from b->a):
CREATE TABLE newedges( node1 INTEGER, node2 INTEGER, weight1 REAL, weight2 REAL );
INSERT INTO newedges
SELECT e1.node1, e1.node2, e1.weight, e2.weight
FROM edges AS e1 INNER JOIN edges AS e2
ON e1.node1=e2.node2 AND e1.node2=e2.node1
WHERE e1.node1 < e1.node2;
The self-join collates forward and backwards edges, and the requirement that e1.node1 is less than e1.node2 ensures that you only see each collated edge once.
Edit in response to a request to fill in zeros for missing backwards edge:
For missing backwards edges, you can do a LEFT JOIN and use a CASE statement to fill in the gaps with zeros:
INSERT INTO newedges
SELECT
e1.node1,
e1.node2,
e1.weight,
CASE WHEN e2.weight IS NULL THEN 0.0 ELSE e2.weight END
FROM edges AS e1 LEFT JOIN edges AS e2
ON e1.node1=e2.node2 AND e1.node2=e2.node1
WHERE e1.node1 < e1.node2;
Hope that helps!
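A runnable check of the LEFT JOIN + CASE variant, sketched in Python with sqlite3 (one forward/backward edge pair plus one deliberately one-way edge):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE edges (node1 INTEGER, node2 INTEGER, weight REAL);
-- edge 1->2 with its backward twin, plus a one-way edge 1->3
INSERT INTO edges VALUES (1, 2, 0.5), (2, 1, 0.7), (1, 3, 0.9);
""")

rows = conn.execute("""
    SELECT e1.node1, e1.node2, e1.weight,
           CASE WHEN e2.weight IS NULL THEN 0.0 ELSE e2.weight END
    FROM edges AS e1 LEFT JOIN edges AS e2
      ON e1.node1 = e2.node2 AND e1.node2 = e2.node1
    WHERE e1.node1 < e1.node2
    ORDER BY e1.node1, e1.node2
""").fetchall()

print(rows)   # [(1, 2, 0.5, 0.7), (1, 3, 0.9, 0.0)]
```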
qid & accept id:
(11114638, 11114673)
query:
How to cut a part of a string in MySQL?
soup:
You can use
\nselect substring_index(substring(mycol, instr(mycol, "=")+1), " ", 1)\n
\nto get the first token after the =.
\nThis returns 76767.
\n
\nThis works in two steps :
\nsubstring(mycol, instr(mycol, "=")+1)\n
\nreturns the string starting after the =
\nand
\nsubstring_index( xxx , " ", 1)\n
\ngets the first element of the virtual array you'd get from a split by " ", and so returns the first token of xxx.
\n
soup wrap:
You can use
select substring_index(substring(mycol, instr(mycol, "=")+1), " ", 1)
to get the first token after the =.
This returns 76767.
This works in two steps :
substring(mycol, instr(mycol, "=")+1)
returns the string starting after the =
and
substring_index( xxx , " ", 1)
gets the first element of the virtual array you'd get from a split by " ", and so returns the first token of xxx.
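The two-step decomposition maps directly onto plain string operations; a Python sketch (the sample value is made up):

```python
def first_token_after_equals(s: str) -> str:
    # substring(mycol, instr(mycol, "=") + 1): everything after the '='
    after = s[s.index("=") + 1:]
    # substring_index(..., " ", 1): first element of the split-by-space
    return after.split(" ", 1)[0]

print(first_token_after_equals("id=76767 name=foo"))   # 76767
```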
qid & accept id:
(11116129, 11116361)
query:
Alter All Column Values using TRIM in SQL
soup:
If the datatype of the name column is VARCHAR, you don't need the RTRIM function; the trailing spaces are trimmed automatically. Use LTRIM only.
\nupdate tablename\nset name = ltrim(name)\nwhere ;\n
\nRun this to see how the trailing spaces are trimmed automatically.
\nDECLARE @mytb table\n(\nname varchar(20)\n);\n\nINSERT INTO @mytb VALUES (' stackoverflow ');\n\nSELECT len(name) from @mytb;\n\nSELECT ltrim(name),len(ltrim(name)) from @mytb;\n
\n
soup wrap:
If the datatype of the name column is VARCHAR, you don't need the RTRIM function; the trailing spaces are trimmed automatically. Use LTRIM only.
update tablename
set name = ltrim(name)
where <condition>; -- supply your own filter here
Run this to see how the trailing spaces are trimmed automatically.
DECLARE @mytb table
(
name varchar(20)
);
INSERT INTO @mytb VALUES (' stackoverflow ');
SELECT len(name) from @mytb;
SELECT ltrim(name),len(ltrim(name)) from @mytb;
qid & accept id:
(11117622, 11120820)
query:
Select all subsets in a many-to-many relation
soup:
DROP SCHEMA tmp CASCADE;\nCREATE SCHEMA tmp;\n\nSET search_path='tmp';\n\n\nCREATE TABLE instrument\n ( id INTEGER NOT NULL PRIMARY KEY\n , zname varchar\n );\nINSERT INTO instrument(id, zname) VALUES\n(1, 'instrument_1'), (2, 'instrument_2')\n, (3, 'instrument_3'), (4, 'instrument_4');\n\nCREATE TABLE piece\n ( id INTEGER NOT NULL PRIMARY KEY\n , zname varchar\n );\nINSERT INTO piece(id, zname) VALUES\n(1, 'piece_1'), (2, 'piece_2'), (3, 'piece_3'), (4, 'piece_4');\n\nCREATE TABLE has_part\n ( piece_id INTEGER NOT NULL\n , instrument_id INTEGER NOT NULL\n , PRIMARY KEY (piece_id,instrument_id)\n );\n\nINSERT INTO has_part(piece_id,instrument_id) VALUES\n(1,1), (1,2), (1,3)\n, (2,1), (2,2), (2,3), (2,4)\n, (3,1), (3,3), (3,4)\n, (4,2)\n ;\n
\nThe pure SQL (note the double negation: NOT EXISTS, NOT IN()):
\nSELECT zname\nFROM piece pp\nWHERE NOT EXISTS (\n SELECT * FROM has_part nx\n WHERE nx.piece_id = pp.id\n AND nx.instrument_id NOT IN (1,2,3)\n )\n ;\n
\n
soup wrap:
DROP SCHEMA tmp CASCADE;
CREATE SCHEMA tmp;
SET search_path='tmp';
CREATE TABLE instrument
( id INTEGER NOT NULL PRIMARY KEY
, zname varchar
);
INSERT INTO instrument(id, zname) VALUES
(1, 'instrument_1'), (2, 'instrument_2')
, (3, 'instrument_3'), (4, 'instrument_4');
CREATE TABLE piece
( id INTEGER NOT NULL PRIMARY KEY
, zname varchar
);
INSERT INTO piece(id, zname) VALUES
(1, 'piece_1'), (2, 'piece_2'), (3, 'piece_3'), (4, 'piece_4');
CREATE TABLE has_part
( piece_id INTEGER NOT NULL
, instrument_id INTEGER NOT NULL
, PRIMARY KEY (piece_id,instrument_id)
);
INSERT INTO has_part(piece_id,instrument_id) VALUES
(1,1), (1,2), (1,3)
, (2,1), (2,2), (2,3), (2,4)
, (3,1), (3,3), (3,4)
, (4,2)
;
The pure SQL (note the double negation: NOT EXISTS, NOT IN()):
SELECT zname
FROM piece pp
WHERE NOT EXISTS (
SELECT * FROM has_part nx
WHERE nx.piece_id = pp.id
AND nx.instrument_id NOT IN (1,2,3)
)
;
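The NOT EXISTS / NOT IN query is standard SQL and runs unchanged on other engines; a Python/sqlite3 sketch with the sample data above, asking which pieces use only instruments 1-3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE piece (id INTEGER PRIMARY KEY, zname TEXT);
INSERT INTO piece VALUES (1,'piece_1'), (2,'piece_2'), (3,'piece_3'), (4,'piece_4');
CREATE TABLE has_part (piece_id INTEGER, instrument_id INTEGER);
INSERT INTO has_part VALUES
  (1,1),(1,2),(1,3),
  (2,1),(2,2),(2,3),(2,4),
  (3,1),(3,3),(3,4),
  (4,2);
""")

# "No part of this piece uses an instrument outside the set {1,2,3}."
playable = [r[0] for r in conn.execute("""
    SELECT zname FROM piece pp
    WHERE NOT EXISTS (
        SELECT * FROM has_part nx
        WHERE nx.piece_id = pp.id
          AND nx.instrument_id NOT IN (1, 2, 3))
    ORDER BY pp.id
""")]

print(playable)   # ['piece_1', 'piece_4']
```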
qid & accept id:
(11119197, 11119946)
query:
sql server table peak time
soup:
I've had a play around - I'm working with sessions with recorded start and end datetime2 values, but hopefully you can adapt your current data to conform to this:
\nSample data (if I've got the answer wrong, maybe you can adapt this, add it to your question, and add more samples and expected outputs):
\ncreate table #Sessions (\n --We'll treat this as a semi-open interval - the session was "live" at SessionStart, and "dead" at SessionEnd\n SessionStart datetime2 not null,\n SessionEnd datetime2 null\n)\ninsert into #Sessions (SessionStart,SessionEnd) values\n('20120101','20120105'),\n('20120103','20120109'),\n('20120107','20120108')\n
\nAnd the query:
\n--Logically, the highest number of simultaneous users was reached at some point when a session started\n;with StartTimes as (\n select distinct SessionStart as Instant from #Sessions\n), Overlaps as (\n select\n st.Instant,COUNT(*) as Cnt,MIN(s.SessionEnd) as SessionEnd\n from\n StartTimes st\n inner join\n #Sessions s\n on\n st.Instant >= s.SessionStart and\n st.Instant < s.SessionEnd\n group by\n st.Instant\n), RankedOverlaps as (\n select Instant as SessionStart,Cnt,SessionEnd,RANK() OVER (ORDER BY Cnt desc) as rnk\n from Overlaps\n)\nselect * from RankedOverlaps where rnk = 1\n\ndrop table #Sessions\n
\nWhich, with my sample data gives:
\nSessionStart Cnt SessionEnd rnk\n---------------------- ----------- ---------------------- --------------------\n2012-01-03 00:00:00.00 2 2012-01-05 00:00:00.00 1\n2012-01-07 00:00:00.00 2 2012-01-08 00:00:00.00 1\n
\n
\nAn alternative approach, still using the above, which also lets you analyze "not quite peak" values, is as follows:
\n--An alternate approach - arrange all of the distinct time values from Sessions into order\n;with Instants as (\n select SessionStart as Instant from #Sessions\n union --We want distinct here\n select SessionEnd from #Sessions\n), OrderedInstants as (\n select Instant,ROW_NUMBER() OVER (ORDER BY Instant) as rn\n from Instants\n), Intervals as (\n select oi1.Instant as StartTime,oi2.Instant as EndTime\n from\n OrderedInstants oi1\n inner join\n OrderedInstants oi2\n on\n oi1.rn = oi2.rn - 1\n), IntervalOverlaps as (\n select\n StartTime,\n EndTime,\n COUNT(*) as Cnt\n from\n Intervals i\n inner join\n #Sessions s\n on\n i.StartTime < s.SessionEnd and\n s.SessionStart < i.EndTime\n group by\n StartTime,\n EndTime\n)\nselect * from IntervalOverlaps order by Cnt desc,StartTime\n
\nThis time, I'm outputting all of the time periods, together with the number of simultaneous users at the time (ordered from highest to lowest):
\nStartTime EndTime Cnt\n---------------------- ---------------------- -----------\n2012-01-03 00:00:00.00 2012-01-05 00:00:00.00 2\n2012-01-07 00:00:00.00 2012-01-08 00:00:00.00 2\n2012-01-01 00:00:00.00 2012-01-03 00:00:00.00 1\n2012-01-05 00:00:00.00 2012-01-07 00:00:00.00 1\n2012-01-08 00:00:00.00 2012-01-09 00:00:00.00 1\n
\n
soup wrap:
I've had a play around - I'm working with sessions with recorded start and end datetime2 values, but hopefully you can adapt your current data to conform to this:
Sample data (if I've got the answer wrong, maybe you can adapt this, add it to your question, and add more samples and expected outputs):
create table #Sessions (
--We'll treat this as a semi-open interval - the session was "live" at SessionStart, and "dead" at SessionEnd
SessionStart datetime2 not null,
SessionEnd datetime2 null
)
insert into #Sessions (SessionStart,SessionEnd) values
('20120101','20120105'),
('20120103','20120109'),
('20120107','20120108')
And the query:
--Logically, the highest number of simultaneous users was reached at some point when a session started
;with StartTimes as (
select distinct SessionStart as Instant from #Sessions
), Overlaps as (
select
st.Instant,COUNT(*) as Cnt,MIN(s.SessionEnd) as SessionEnd
from
StartTimes st
inner join
#Sessions s
on
st.Instant >= s.SessionStart and
st.Instant < s.SessionEnd
group by
st.Instant
), RankedOverlaps as (
select Instant as SessionStart,Cnt,SessionEnd,RANK() OVER (ORDER BY Cnt desc) as rnk
from Overlaps
)
select * from RankedOverlaps where rnk = 1
drop table #Sessions
Which, with my sample data gives:
SessionStart Cnt SessionEnd rnk
---------------------- ----------- ---------------------- --------------------
2012-01-03 00:00:00.00 2 2012-01-05 00:00:00.00 1
2012-01-07 00:00:00.00 2 2012-01-08 00:00:00.00 1
An alternative approach, still using the above, which also lets you analyze "not quite peak" values, is as follows:
--An alternate approach - arrange all of the distinct time values from Sessions into order
;with Instants as (
select SessionStart as Instant from #Sessions
union --We want distinct here
select SessionEnd from #Sessions
), OrderedInstants as (
select Instant,ROW_NUMBER() OVER (ORDER BY Instant) as rn
from Instants
), Intervals as (
select oi1.Instant as StartTime,oi2.Instant as EndTime
from
OrderedInstants oi1
inner join
OrderedInstants oi2
on
oi1.rn = oi2.rn - 1
), IntervalOverlaps as (
select
StartTime,
EndTime,
COUNT(*) as Cnt
from
Intervals i
inner join
#Sessions s
on
i.StartTime < s.SessionEnd and
s.SessionStart < i.EndTime
group by
StartTime,
EndTime
)
select * from IntervalOverlaps order by Cnt desc,StartTime
This time, I'm outputting all of the time periods, together with the number of simultaneous users at the time (ordered from highest to lowest):
StartTime EndTime Cnt
---------------------- ---------------------- -----------
2012-01-03 00:00:00.00 2012-01-05 00:00:00.00 2
2012-01-07 00:00:00.00 2012-01-08 00:00:00.00 2
2012-01-01 00:00:00.00 2012-01-03 00:00:00.00 1
2012-01-05 00:00:00.00 2012-01-07 00:00:00.00 1
2012-01-08 00:00:00.00 2012-01-09 00:00:00.00 1
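The sweep in the second query can be mirrored in plain Python, which may make the logic easier to follow: collect the distinct boundary instants, then count the sessions covering each one (a sketch with made-up sessions chosen to reproduce the sample output; start inclusive, end exclusive):

```python
# Plain-Python version of the "count sessions covering each instant" sweep.
from datetime import date

# Hypothetical sessions matching the sample output: (start, end), end exclusive.
sessions = [
    (date(2012, 1, 1), date(2012, 1, 5)),
    (date(2012, 1, 3), date(2012, 1, 8)),
    (date(2012, 1, 7), date(2012, 1, 9)),
]

# Distinct instants where the concurrency level can change.
instants = sorted({t for s in sessions for t in s})

# Count sessions covering each instant (start inclusive, end exclusive).
overlaps = {t: sum(1 for a, b in sessions if a <= t < b) for t in instants}
peak = max(overlaps.values())
peak_instants = [t for t, c in overlaps.items() if c == peak]
print(peak, peak_instants)  # → 2 [datetime.date(2012, 1, 3), datetime.date(2012, 1, 7)]
```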
qid & accept id:
(11135522, 11135672)
query:
The best way to select the latest rates for several currency codes from the DB
soup:
Assuming that the latest exchange rate is the one with the highest id you can use:
\nSELECT *\nFROM rates r\nWHERE r.id IN (\n SELECT MAX(r1.id)\n FROM rates r1\n GROUP BY r1.currency_code\n);\n
\nBut I strongly suggest another pattern I love. I explained it in another answer this morning:
\nSELECT\n c.*,\n r1.*\nFROM currency c\nINNER JOIN rates r1 ON c.code = r1.currency_code\nLEFT JOIN rates r2 ON r1.currency_code = r2.currency_code AND r2.id > r1.id\nWHERE r2.id IS NULL;\n
\n
soup wrap:
Assuming that the latest exchange rate is the one with the highest id you can use:
SELECT *
FROM rates r
WHERE r.id IN (
SELECT MAX(r1.id)
FROM rates r1
GROUP BY r1.currency_code
);
But I strongly suggest another pattern I love. I explained it in another answer this morning:
SELECT
c.*,
r1.*
FROM currency c
INNER JOIN rates r1 ON c.code = r1.currency_code
LEFT JOIN rates r2 ON r1.currency_code = r2.currency_code AND r2.id > r1.id
WHERE r2.id IS NULL;
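A runnable miniature of the second pattern, using Python's sqlite3 with made-up rates (the table and column names follow the answer; the data is invented):

```python
# Latest-row-per-group via the "no newer sibling exists" self-join.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE rates (id INTEGER PRIMARY KEY, currency_code TEXT, rate REAL);
INSERT INTO rates VALUES
  (1, 'EUR', 1.10), (2, 'EUR', 1.12),
  (3, 'GBP', 1.30), (4, 'GBP', 1.28), (5, 'GBP', 1.29);
""")

# Keep r1 only when no newer row r2 exists for the same currency.
rows = con.execute("""
    SELECT r1.currency_code, r1.rate
    FROM rates r1
    LEFT JOIN rates r2
      ON r1.currency_code = r2.currency_code AND r2.id > r1.id
    WHERE r2.id IS NULL
    ORDER BY r1.currency_code
""").fetchall()
print(rows)  # → [('EUR', 1.12), ('GBP', 1.29)]
```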
qid & accept id:
(11168749, 11168830)
query:
How To Find First Date of All Months In A Year
soup:
Please try the following. You may want to tweak the date format/timezone
\nselect to_date('2012/'||l||'/01', 'yyyy/mm/dd') \nfrom (select level l from dual connect by level < 13)\n
\nEDIT: As provided by the op in the comments, the current year needs to be taken rather than hardcoding it. The updated query is
\nSELECT L || '/01/' || TO_CHAR (SYSDATE, 'YYYY') DATESS FROM \n(SELECT LEVEL L FROM DUAL CONNECT BY LEVEL < 13)\n
\n
soup wrap:
Please try the following. You may want to tweak the date format/timezone
select to_date('2012/'||l||'/01', 'yyyy/mm/dd')
from (select level l from dual connect by level < 13)
EDIT: As clarified by the OP in the comments, the current year should be used rather than hardcoded. The updated query is:
SELECT L || '/01/' || TO_CHAR (SYSDATE, 'YYYY') DATESS FROM
(SELECT LEVEL L FROM DUAL CONNECT BY LEVEL < 13)
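Outside Oracle, the same list can be generated trivially; a Python sketch for comparison (the year is hard-coded here, like the first Oracle query; swap in date.today().year for the current year):

```python
# First day of each month of a given year.
from datetime import date

year = 2012  # replace with date.today().year for the current year
firsts = [date(year, m, 1) for m in range(1, 13)]
print(firsts[0], firsts[-1])  # → 2012-01-01 2012-12-01
```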
qid & accept id:
(11215684, 11215700)
query:
Find all but allowed characters in column
soup:
\nit's supposed to pull all rows that do not contain characters we do not want.
\n
\nTo find rows that contain x you can use LIKE:
\nSELECT * FROM yourtable WHERE col LIKE '%x%'\n
\nTo find rows that do not contain x you can use NOT LIKE:
\nSELECT * FROM yourtable WHERE col NOT LIKE '%x%'\n
\nSo your query should use NOT LIKE because you want rows that don't contain something:
\nSELECT NID FROM NOTES WHERE NOTE NOT LIKE '%[0-9a-zA-Z#.;:/^\(\)\@\ \ \\\-]%'\n
\n
\n\nThat should return any rows that do not contain
\n0-9 a-z A-z . : ; ^ & @ \ / ( ) #\n
\n
\nNo. Because of the ^ at the start, it returns the rows that don't contain characters except those. Those characters you listed are the characters that are allowed.
\n
soup wrap:
it's supposed to pull all rows that do not contain characters we do not want.
To find rows that contain x you can use LIKE:
SELECT * FROM yourtable WHERE col LIKE '%x%'
To find rows that do not contain x you can use NOT LIKE:
SELECT * FROM yourtable WHERE col NOT LIKE '%x%'
So your query should use NOT LIKE because you want rows that don't contain something:
SELECT NID FROM NOTES WHERE NOTE NOT LIKE '%[0-9a-zA-Z#.;:/^\(\)\@\ \ \\\-]%'
That should return any rows that do not contain
0-9 a-z A-z . : ; ^ & @ \ / ( ) #
No. Because of the ^ at the start, it returns the rows that don't contain characters except those. Those characters you listed are the characters that are allowed.
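SQL Server's LIKE bracket classes behave much like regex character classes, so the "contains a disallowed character" semantics can be illustrated in Python (a sketch with a simplified allowed set, not the exact class from the answer):

```python
# Regex analogue of LIKE '%[^allowed]%': match means the string contains at
# least one character OUTSIDE the allowed set.
import re

# Simplified allowed set: digits, letters, and # . ; : / ^ ( ) @ space \ -
disallowed = re.compile(r"[^0-9a-zA-Z#.;:/^()@ \\-]")

def has_bad_char(s):
    """True if s contains a character outside the allowed set."""
    return disallowed.search(s) is not None

print(has_bad_char("abc:123/#"))  # → False (all characters allowed)
print(has_bad_char("abc$123"))    # → True ($ is not in the set)
```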
qid & accept id:
(11227924, 13309814)
query:
PIVOT on hierarchical data
soup:
You can use PIVOT, UNPIVOT and a recursive query to perform this.
\nStatic Version, is where you hard-code the values to the transformed:
\n;with hd (id, name, parentid, category)\nas\n(\n select id, name, parentid, 1 as category\n from yourtable\n where parentid is null\n union all\n select t1.id, t1.name, t1.parentid, hd.category +1\n from yourtable t1\n inner join hd\n on t1.parentid = hd.id\n),\nunpiv as\n(\n select value, 'cat_'+cast(category as varchar(5))+'_'+ col col_name\n from\n (\n select cast(id as varchar(17)) id, name, parentid, category\n from hd\n ) src\n unpivot\n (\n value for col in (id, name)\n ) un\n)\nselect [cat_1_id], [cat_1_name],\n [cat_2_id], [cat_2_name],\n [cat_3_id], [cat_3_name]\nfrom unpiv\npivot\n(\n max(value)\n for col_name in ([cat_1_id], [cat_1_name],\n [cat_2_id], [cat_2_name],\n [cat_3_id], [cat_3_name])\n) piv\n
\n\nDynamic Version, the values are generated at run-time:
\n;with hd (id, name, parentid, category)\nas\n(\n select id, name, parentid, 1 as category\n from yourtable\n where parentid is null\n union all\n select t1.id, t1.name, t1.parentid, hd.category +1\n from yourtable t1\n inner join hd\n on t1.parentid = hd.id\n)\nselect category categoryNumber\ninto #temp\nfrom hd\n\nDECLARE @cols AS NVARCHAR(MAX),\n @query AS NVARCHAR(MAX)\n\nselect @cols = STUFF((SELECT distinct ',' + quotename('cat_'+cast(CATEGORYNUMBER as varchar(10))+'_'+col) \n from #temp\n cross apply (select 'id' col\n union all \n select 'name' col) src\n FOR XML PATH(''), TYPE\n ).value('.', 'NVARCHAR(MAX)') \n ,1,1,'')\n\nset @query = ';with hd (id, name, parentid, category)\n as\n (\n select id, name, parentid, 1 as category\n from yourtable\n where parentid is null\n union all\n select t1.id, t1.name, t1.parentid, hd.category +1\n from yourtable t1\n inner join hd\n on t1.parentid = hd.id\n ),\n unpiv as\n (\n select value, ''cat_''+cast(category as varchar(5))+''_''+ col col_name\n from\n (\n select cast(id as varchar(17)) id, name, parentid, category \n from hd\n ) src\n unpivot\n (\n value for col in (id, name)\n ) un\n )\n select '+@cols+'\n from unpiv\n pivot\n (\n max(value)\n for col_name in ('+@cols+')\n ) piv'\n\nexecute(@query)\n\ndrop table #temp\n
\n\nThe Results are the same for both:
\n| CAT_1_ID | CAT_1_NAME | CAT_2_ID | CAT_2_NAME | CAT_3_ID | CAT_3_NAME |\n--------------------------------------------------------------------------------\n| 1 | Decorating | 2 | Paint and Brushes | 5 | Rollers |\n
\n
soup wrap:
You can use PIVOT, UNPIVOT and a recursive query to perform this.
Static version, where you hard-code the values to be transformed:
;with hd (id, name, parentid, category)
as
(
select id, name, parentid, 1 as category
from yourtable
where parentid is null
union all
select t1.id, t1.name, t1.parentid, hd.category +1
from yourtable t1
inner join hd
on t1.parentid = hd.id
),
unpiv as
(
select value, 'cat_'+cast(category as varchar(5))+'_'+ col col_name
from
(
select cast(id as varchar(17)) id, name, parentid, category
from hd
) src
unpivot
(
value for col in (id, name)
) un
)
select [cat_1_id], [cat_1_name],
[cat_2_id], [cat_2_name],
[cat_3_id], [cat_3_name]
from unpiv
pivot
(
max(value)
for col_name in ([cat_1_id], [cat_1_name],
[cat_2_id], [cat_2_name],
[cat_3_id], [cat_3_name])
) piv
Dynamic version, where the values are generated at run-time:
;with hd (id, name, parentid, category)
as
(
select id, name, parentid, 1 as category
from yourtable
where parentid is null
union all
select t1.id, t1.name, t1.parentid, hd.category +1
from yourtable t1
inner join hd
on t1.parentid = hd.id
)
select category categoryNumber
into #temp
from hd
DECLARE @cols AS NVARCHAR(MAX),
@query AS NVARCHAR(MAX)
select @cols = STUFF((SELECT distinct ',' + quotename('cat_'+cast(CATEGORYNUMBER as varchar(10))+'_'+col)
from #temp
cross apply (select 'id' col
union all
select 'name' col) src
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
,1,1,'')
set @query = ';with hd (id, name, parentid, category)
as
(
select id, name, parentid, 1 as category
from yourtable
where parentid is null
union all
select t1.id, t1.name, t1.parentid, hd.category +1
from yourtable t1
inner join hd
on t1.parentid = hd.id
),
unpiv as
(
select value, ''cat_''+cast(category as varchar(5))+''_''+ col col_name
from
(
select cast(id as varchar(17)) id, name, parentid, category
from hd
) src
unpivot
(
value for col in (id, name)
) un
)
select '+@cols+'
from unpiv
pivot
(
max(value)
for col_name in ('+@cols+')
) piv'
execute(@query)
drop table #temp
The Results are the same for both:
| CAT_1_ID | CAT_1_NAME | CAT_2_ID | CAT_2_NAME | CAT_3_ID | CAT_3_NAME |
--------------------------------------------------------------------------------
| 1 | Decorating | 2 | Paint and Brushes | 5 | Rollers |
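The flattening itself can be sketched in plain Python: walk the parent/child chain from the root and emit one cat_N_id/cat_N_name pair per level (made-up rows mirroring the result above; this sketch assumes a single chain, one child per parent):

```python
# Flatten a parent/child chain into cat_1_*, cat_2_*, ... columns.
rows = [
    (1, 'Decorating', None),
    (2, 'Paint and Brushes', 1),
    (5, 'Rollers', 2),
]
# Index rows by their parent id (None = root); assumes one child per parent.
by_parent = {parent: (rid, name) for rid, name, parent in rows}

flat = {}
node, level = by_parent.get(None), 1
while node:
    rid, name = node
    flat[f'cat_{level}_id'], flat[f'cat_{level}_name'] = rid, name
    node, level = by_parent.get(rid), level + 1
print(flat)
```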
qid & accept id:
(11260900, 11260933)
query:
Making a query that only shows unique records
soup:
If you need only emailAddress it is quite simple:
\nselect distinct emailAddress from \n
\nEdited according to request in comments.
\nIf you want to choose both distinct emailAddress and ANY customerName related to it then you must somehow tell SQL how to choose the customerName. The easiest way is to select i.e. MIN(customerName), then all other (usually those that are later in alphabet but it actually depends on collation) are discarded. Query would be:
\nselect emailAddress, min(customerName) as pickedCustomerName\nfrom \ngroup by emailAddress\n
\n
soup wrap:
If you need only emailAddress it is quite simple:
select distinct emailAddress from
Edited according to request in comments.
If you want both the distinct emailAddress and ANY customerName related to it, then you must somehow tell SQL how to choose the customerName. The easiest way is to select e.g. MIN(customerName); all the others (usually those later in the alphabet, though it actually depends on collation) are discarded. The query would be:
select emailAddress, min(customerName) as pickedCustomerName
from
group by emailAddress
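A quick sqlite3 illustration of the GROUP BY pattern, with an invented customers table:

```python
# One row per email address, picking the alphabetically-first customer name.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customers (customerName TEXT, emailAddress TEXT);
INSERT INTO customers VALUES
  ('Zoe', 'a@x.com'), ('Amy', 'a@x.com'), ('Bob', 'b@x.com');
""")
rows = con.execute("""
    SELECT emailAddress, MIN(customerName) AS pickedCustomerName
    FROM customers
    GROUP BY emailAddress
    ORDER BY emailAddress
""").fetchall()
print(rows)  # → [('a@x.com', 'Amy'), ('b@x.com', 'Bob')]
```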
qid & accept id:
(11282433, 11282492)
query:
Minus Query in MsAccess
soup:
One possibility is NOT IN. There is no such thing as a minus query in MS Access.
\nselect h.* from hello h\nWHERE uniqueid NOT IN\n(select uniqueid from hello1 h1)\n
\nFor a purely sql solution, you need, say:
\nSELECT t.* FROM Table t\nLEFT JOIN NewTable n\nON t.ID = n.ID\nWHERE t.Field1 & "" <> n.Field1 & ""\n OR t.Field2 & "" <> n.Field2 & ""\n
\nHowever, it is easier using VBA.
\n
soup wrap:
One possibility is NOT IN. There is no such thing as a minus query in MS Access.
select h.* from hello h
WHERE uniqueid NOT IN
(select uniqueid from hello1 h1)
For a purely SQL solution, you need something like:
SELECT t.* FROM Table t
LEFT JOIN NewTable n
ON t.ID = n.ID
WHERE t.Field1 & "" <> n.Field1 & ""
OR t.Field2 & "" <> n.Field2 & ""
However, it is easier using VBA.
qid & accept id:
(11292524, 11293632)
query:
How can get null column after UNPIVOT?
soup:
Have you tried using COALESCE or ISNULL?
\ne.g.
\nISNULL(AVG(column_1), 0) as column_1, \n
\nThis does mean that you will get 0 as the result instead of 'NULL' though - do you need null when they are all NULL?
\nEdit:
\nAlso, is there any need for an unpivot? Since you are specifying all 3 columns, why not just do:
\nSELECT BankID, (column_1 + column_2 + column_3) / 3 FROM partstat\nWHERE bankid = 4\n
\nThis gives you the same results but with the NULL
\nOf course this is assuming you have 1 row per bankid
\nEdit:
\nUNPIVOT isn't supposed to be used like this as far as I can see - I'd unpivot first then try the AVG... let me have a go...
\nEdit:
\nAh I take that back, it is just a problem with NULLs - other posts suggest ISNULL or COALESCE to eliminate the nulls, you could use a placeholder value like -1 which could work e.g.
\nSELECT bankid, AVG(CASE WHEN value = -1 THEN NULL ELSE value END) AS Average \nFROM ( \n SELECT bankid, \n isnull(AVG(column_1), -1) as column_1 ,\n AVG(Column_2) as column_2 ,\n Avg(column_3) as column_3 \n FROM data \n group by bankid\n) as pvt \nUNPIVOT (Value FOR o in (column_1, column_2, column_3)) as u\nGROUP BY bankid \n
\nYou need to ensure this will work though as if you have a value in column2/3 then column_1 will no longer = -1. It might be worth doing a case to see if they are all NULL in which case replacing the 1st null with -1
\n
soup wrap:
Have you tried using COALESCE or ISNULL?
e.g.
ISNULL(AVG(column_1), 0) as column_1,
This does mean that you will get 0 as the result instead of 'NULL' though - do you need null when they are all NULL?
Edit:
Also, is there any need for an unpivot? Since you are specifying all 3 columns, why not just do:
SELECT BankID, (column_1 + column_2 + column_3) / 3 FROM partstat
WHERE bankid = 4
This gives you the same results but with the NULL
Of course this is assuming you have 1 row per bankid
Edit:
UNPIVOT isn't supposed to be used like this as far as I can see - I'd unpivot first then try the AVG... let me have a go...
Edit:
Ah I take that back, it is just a problem with NULLs - other posts suggest ISNULL or COALESCE to eliminate the nulls, you could use a placeholder value like -1 which could work e.g.
SELECT bankid, AVG(CASE WHEN value = -1 THEN NULL ELSE value END) AS Average
FROM (
SELECT bankid,
isnull(AVG(column_1), -1) as column_1 ,
AVG(Column_2) as column_2 ,
Avg(column_3) as column_3
FROM data
group by bankid
) as pvt
UNPIVOT (Value FOR o in (column_1, column_2, column_3)) as u
GROUP BY bankid
You need to make sure this will work for your data, though: if there is a value in column_2 or column_3, column_1 will no longer be -1. It might be worth using a CASE to check whether all three are NULL, and only then replace the first NULL with -1.
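The root cause is that AVG ignores NULLs and yields NULL only when every input is NULL; a small sqlite3 illustration with invented data:

```python
# AVG skips NULLs; a group whose values are all NULL averages to NULL (None).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE data (bankid INT, column_1 INT);
INSERT INTO data VALUES (4, NULL), (4, NULL), (5, 10), (5, 20);
""")
rows = con.execute(
    "SELECT bankid, AVG(column_1) FROM data GROUP BY bankid ORDER BY bankid"
).fetchall()
print(rows)  # → [(4, None), (5, 15.0)]  -- bank 4 has only NULLs
```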
qid & accept id:
(11307344, 11307526)
query:
How to check verify that SQL query was ran in transaction?
soup:
There's a transaction section in the output of:
\nSHOW ENGINE INNODB STATUS\G\n
\nWhich looks like (that's from my local MySQL currently not running any queries):
\nTRANSACTIONS\n------------\nTrx id counter 900\nPurge done for trx's n:o < 0 undo n:o < 0\nHistory list length 0\nLIST OF TRANSACTIONS FOR EACH SESSION:\n---TRANSACTION 0, not started\nMySQL thread id 47, OS thread handle 0x7fc8b85d3700, query id 120 localhost root\nSHOW ENGINE INNODB STATUS\n
\nI don't know if you can actively monitor this information, so that you can see it exactly in the moment of you 3 insert operations. You can probably use that last bullet of yours (using slow queries) here...
\nIn addition MySQL has command counters. This counters can be accessed via:
\nSHOW GLOBAL STATUS LIKE "COM\_%"
\nEach execution of a command increments the counter associated with it. Transaction related counters are Com_begin, Com_commit and Com_rollback, so you can execute your code and monitor those counters.
\n
soup wrap:
There's a transaction section in the output of:
SHOW ENGINE INNODB STATUS\G
Which looks like (that's from my local MySQL currently not running any queries):
TRANSACTIONS
------------
Trx id counter 900
Purge done for trx's n:o < 0 undo n:o < 0
History list length 0
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0, not started
MySQL thread id 47, OS thread handle 0x7fc8b85d3700, query id 120 localhost root
SHOW ENGINE INNODB STATUS
I don't know if you can actively monitor this information so that you can see it at the exact moment of your 3 insert operations. You can probably use that last bullet of yours (using slow queries) here...
In addition, MySQL has command counters. These counters can be accessed via:
SHOW GLOBAL STATUS LIKE "COM\_%"
Each execution of a command increments the counter associated with it. Transaction related counters are Com_begin, Com_commit and Com_rollback, so you can execute your code and monitor those counters.
qid & accept id:
(11308438, 11309255)
query:
MYSQL auto_increment_increment
soup:
Updated version: only a single id field is used. This is very probably not atomic, so use inside a transaction if you need concurrency:
\nhttp://sqlfiddle.com/#!2/a4ed8/1
\nCREATE TABLE IF NOT EXISTS person (\n id INT NOT NULL AUTO_INCREMENT,\n PRIMARY KEY ( id )\n) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;\n\nCREATE TRIGGER insert_kangaroo_id BEFORE INSERT ON person FOR EACH ROW BEGIN\n DECLARE newid INT;\n\n SET newid = (SELECT AUTO_INCREMENT\n FROM information_schema.TABLES\n WHERE TABLE_SCHEMA = DATABASE()\n AND TABLE_NAME = 'person'\n );\n\n IF NEW.id AND NEW.id >= newid THEN\n SET newid = NEW.id;\n END IF;\n\n SET NEW.id = 5 * CEILING( newid / 5 );\nEND;\n
\nOld, non working "solution" (the before insert trigger can't see the current auto increment value):
\nhttp://sqlfiddle.com/#!2/f4f9a/1
\nCREATE TABLE IF NOT EXISTS person (\n secretid INT NOT NULL AUTO_INCREMENT,\n id INT NOT NULL,\n PRIMARY KEY ( secretid )\n) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;\n\nCREATE TRIGGER update_kangaroo_id BEFORE UPDATE ON person FOR EACH ROW BEGIN\n SET NEW.id = NEW.secretid * 5;\nEND;\n\nCREATE TRIGGER insert_kangaroo_id BEFORE INSERT ON person FOR EACH ROW BEGIN\n SET NEW.id = NEW.secretid * 5; -- NEW.secretid is empty = unusuable!\nEND;\n
\n
soup wrap:
Updated version: only a single id field is used. This is very probably not atomic, so use inside a transaction if you need concurrency:
http://sqlfiddle.com/#!2/a4ed8/1
CREATE TABLE IF NOT EXISTS person (
id INT NOT NULL AUTO_INCREMENT,
PRIMARY KEY ( id )
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;
CREATE TRIGGER insert_kangaroo_id BEFORE INSERT ON person FOR EACH ROW BEGIN
DECLARE newid INT;
SET newid = (SELECT AUTO_INCREMENT
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE()
AND TABLE_NAME = 'person'
);
IF NEW.id AND NEW.id >= newid THEN
SET newid = NEW.id;
END IF;
SET NEW.id = 5 * CEILING( newid / 5 );
END;
Old, non-working "solution" (the before insert trigger can't see the current auto increment value):
http://sqlfiddle.com/#!2/f4f9a/1
CREATE TABLE IF NOT EXISTS person (
secretid INT NOT NULL AUTO_INCREMENT,
id INT NOT NULL,
PRIMARY KEY ( secretid )
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;
CREATE TRIGGER update_kangaroo_id BEFORE UPDATE ON person FOR EACH ROW BEGIN
SET NEW.id = NEW.secretid * 5;
END;
CREATE TRIGGER insert_kangaroo_id BEFORE INSERT ON person FOR EACH ROW BEGIN
SET NEW.id = NEW.secretid * 5; -- NEW.secretid is empty = unusable!
END;
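The heart of the working trigger is the "round up to the next multiple of 5" arithmetic (SET NEW.id = 5 * CEILING(newid / 5)); a Python sketch of just that step:

```python
# Round a would-be auto-increment value up to the next multiple of 5, as the
# trigger does. Note 1..5 all map to 5; in MySQL that's fine because inserting
# id=5 pushes AUTO_INCREMENT past 5, so the next insert maps to 10.
import math

def kangaroo_id(next_auto_increment):
    """Map the would-be auto-increment value onto the next multiple of 5."""
    return 5 * math.ceil(next_auto_increment / 5)

print([kangaroo_id(n) for n in (1, 5, 6, 11)])  # → [5, 5, 10, 15]
```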
qid & accept id:
(11329936, 11330295)
query:
Calculate Percentages In Query - Access SQL
soup:
Your subquery has no where clause and thus counts all records, but you can do it without subquery
\nSELECT\n "Criterion = 1" AS CritDesc,\n SUM(IIf(Criterion = 1, 1, 0)) AS NumCrit,\n COUNT(*) AS TotalNum,\n SUM(IIf(Criterion = 1, 1, 0)) / COUNT(*) AS Percentage,\n ParentNumber AS Parent\nFROM\n tblChild\nGROUP BY\n ParentNumber;\n
\nNote: I dropped the WHERE-clause. Instead I am counting the records fulfilling the criterion by summing up 1 for Criterion = 1 and 0 otherwise. This allows me to get the total number per ParentNumber at the same time with Count(*).
\n
\nUPDATE
\nYou might want to get results for parents having no children as well. In that case you can use an outer join
\nSELECT\n "Criterion = 1" AS CritDesc,\n SUM(IIf(C.Criterion = 1, 1, 0)) AS NumCrit,\n COUNT(C.Number) AS TotalNumOfChildren,\n SUM(IIf(C.Criterion = 1, 1, 0)) / COUNT(*) AS Percentage,\n P.Number AS Parent\nFROM\n tblChild AS C\n LEFT JOIN tblParent AS P\n ON C.ParentNumber = P.Number \nGROUP BY\n P.Number;\n
\nNote that I get the total number of children with Count(C.Number) as Count(*) would count records with no children as well and yield 1 in that case. In the percentage calculation, however, I divide by Count(*) in order to avoid a division by zero. The result will still be correct in that case, since the sum of records with Criterion = 1 will be zero.
\n
soup wrap:
Your subquery has no where clause and thus counts all records, but you can do it without subquery
SELECT
"Criterion = 1" AS CritDesc,
SUM(IIf(Criterion = 1, 1, 0)) AS NumCrit,
COUNT(*) AS TotalNum,
SUM(IIf(Criterion = 1, 1, 0)) / COUNT(*) AS Percentage,
ParentNumber AS Parent
FROM
tblChild
GROUP BY
ParentNumber;
Note: I dropped the WHERE-clause. Instead I am counting the records fulfilling the criterion by summing up 1 for Criterion = 1 and 0 otherwise. This allows me to get the total number per ParentNumber at the same time with Count(*).
UPDATE
You might want to get results for parents having no children as well. In that case you can use an outer join
SELECT
"Criterion = 1" AS CritDesc,
SUM(IIf(C.Criterion = 1, 1, 0)) AS NumCrit,
COUNT(C.Number) AS TotalNumOfChildren,
SUM(IIf(C.Criterion = 1, 1, 0)) / COUNT(*) AS Percentage,
P.Number AS Parent
FROM
tblChild AS C
LEFT JOIN tblParent AS P
ON C.ParentNumber = P.Number
GROUP BY
P.Number;
Note that I get the total number of children with Count(C.Number) as Count(*) would count records with no children as well and yield 1 in that case. In the percentage calculation, however, I divide by Count(*) in order to avoid a division by zero. The result will still be correct in that case, since the sum of records with Criterion = 1 will be zero.
qid & accept id:
(11350686, 11351593)
query:
Combine results of joins on two tables
soup:
Linked by (tag_id, mark_id)
\nSELECT DISTINCT i.*\nFROM tags_users tu \nJOIN marks_users mu USING (user_id)\nJOIN items i USING (tag_id, mark_id)\nWHERE tu.user_id = 5;\n
\nThe DISTINCT should not be necessary, if you have defined multi-column primary or unique keys on the columns.
\nLinked by tag_id or mark_id
\n@Gordon's answer is perfectly valid. But it will perform terribly.
\nThis will be much faster:
\nSELECT i.*\nFROM items i \nWHERE EXISTS (\n SELECT 1\n FROM tags_users tu\n WHERE tu.tag_id = i.tag_id\n AND tu.user_id = 5\n )\nOR EXISTS (\n SELECT 1\n FROM marks_users mu \n WHERE mu.mark_id = i.mark_id\n AND mu.user_id = 5\n );\n
\nAssumes that entries in items itself are UNIQUE on (tag_id, mark_id).
\nWhy is this much faster?
\nIf you JOIN to two unrelated tables (like in @Gordon's answer), you effectively form a cross join, which are known for rapidly degrading performance with growing number of rows. O(N²). Say, you have:
\n\n- 100 users, 100 tags and 100 marks.
\n- Every combination exists (simple hypothetical setup, real life data will be less balanced).
\n- Results in 10,000 rows in each of the tables.
\n
\nThis will happen in @Gordon's query:
\n\n- JOIN rows of
items to tags_users. Each item is joined to 100 rows, resulting in\n10,000 x 100 = 1,000,000 rows. (!) \n- JOIN that to
marks_users. Each row is joined to 100 marks, resulting in\n100,000,000 rows. (!!) \n- The
WHERE clause is applied and the many duplicates are collapsed by DISTINCT, resulting in 10,000 rows. \n
\nTest with EXPLAIN ANALYZE. The difference will be obvious even with small numbers and staggering with growing numbers.
\n\nBenchmarks
\nI ran some quick tests with this setup on my machine (pg 9.1):
\nGordon's query
\nSELECT DISTINCT i.*\nFROM items i\nLEFT JOIN tags_users tu on i.tag_id = tu.tag_id\nLEFT JOIN marks_users mu on i.mark_id = mu.mark_id\nWHERE 5 IN (tu.user_id, mu.user_id);\n
\nTotal runtime: 38229.860 ms
\nSanitized version
\nPulling the condition on user_id into the JOIN clause cuts down on the combinations radically, but it is still a (much tinier) cross join.
\nSELECT DISTINCT i.*\nFROM items i\nLEFT JOIN tags_users tu on i.tag_id = tu.tag_id AND tu.user_id = 5\nLEFT JOIN marks_users mu on i.mark_id = mu.mark_id AND mu.user_id = 5\nWHERE tu.user_id = 5 OR mu.user_id = 5;\n
\nTotal runtime: 110.450 ms
\nWith EXISTS semi-joins
\n(see query above)
\nWith this query, every row is checked once if it qualifies. You don't need a DISTINCT, because rows are not duplicated to begin with.
\nTotal runtime: 26.569 ms
\nUNION
\nFor completeness, the variant with UNION. Use UNION, not UNION ALL to remove duplicates:
\nSELECT i.*\nFROM items i \nJOIN tags_users tu ON i.tag_id = tu.tag_id AND tu.user_id = 5\nUNION\nSELECT i.*\nFROM items i \nJOIN marks_users mu ON i.mark_id = mu.mark_id AND mu.user_id = 5;\n
\nTotal runtime: 178.901 ms
\n
soup wrap:
Linked by (tag_id, mark_id)
SELECT DISTINCT i.*
FROM tags_users tu
JOIN marks_users mu USING (user_id)
JOIN items i USING (tag_id, mark_id)
WHERE tu.user_id = 5;
The DISTINCT should not be necessary if you have defined multi-column primary or unique keys on the columns.
Linked by tag_id or mark_id
@Gordon's answer is perfectly valid. But it will perform terribly.
This will be much faster:
SELECT i.*
FROM items i
WHERE EXISTS (
SELECT 1
FROM tags_users tu
WHERE tu.tag_id = i.tag_id
AND tu.user_id = 5
)
OR EXISTS (
SELECT 1
FROM marks_users mu
WHERE mu.mark_id = i.mark_id
AND mu.user_id = 5
);
Assumes that entries in items itself are UNIQUE on (tag_id, mark_id).
Why is this much faster?
If you JOIN to two unrelated tables (like in @Gordon's answer), you effectively form a cross join, which is known for rapidly degrading performance as the number of rows grows: O(N²). Say you have:
- 100 users, 100 tags and 100 marks.
- Every combination exists (simple hypothetical setup, real life data will be less balanced).
- Results in 10,000 rows in each of the tables.
This will happen in @Gordon's query:
- JOIN rows of items to tags_users. Each item is joined to 100 rows, resulting in 10,000 x 100 = 1,000,000 rows. (!)
- JOIN that to marks_users. Each row is joined to 100 marks, resulting in 100,000,000 rows. (!!)
- The WHERE clause is applied and the many duplicates are collapsed by DISTINCT, resulting in 10,000 rows.
Test with EXPLAIN ANALYZE. The difference will be obvious even with small numbers and staggering with growing numbers.
Benchmarks
I ran some quick tests with this setup on my machine (pg 9.1):
Gordon's query
SELECT DISTINCT i.*
FROM items i
LEFT JOIN tags_users tu on i.tag_id = tu.tag_id
LEFT JOIN marks_users mu on i.mark_id = mu.mark_id
WHERE 5 IN (tu.user_id, mu.user_id);
Total runtime: 38229.860 ms
Sanitized version
Pulling the condition on user_id into the JOIN clause cuts down on the combinations radically, but it is still a (much tinier) cross join.
SELECT DISTINCT i.*
FROM items i
LEFT JOIN tags_users tu on i.tag_id = tu.tag_id AND tu.user_id = 5
LEFT JOIN marks_users mu on i.mark_id = mu.mark_id AND mu.user_id = 5
WHERE tu.user_id = 5 OR mu.user_id = 5;
Total runtime: 110.450 ms
With EXISTS semi-joins
(see query above)
With this query, every row is checked just once to see whether it qualifies. You don't need a DISTINCT, because rows are not duplicated to begin with.
Total runtime: 26.569 ms
UNION
For completeness, the variant with UNION. Use UNION, not UNION ALL to remove duplicates:
SELECT i.*
FROM items i
JOIN tags_users tu ON i.tag_id = tu.tag_id AND tu.user_id = 5
UNION
SELECT i.*
FROM items i
JOIN marks_users mu ON i.mark_id = mu.mark_id AND mu.user_id = 5;
Total runtime: 178.901 ms
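A runnable miniature of the EXISTS semi-join, using sqlite3 and invented rows (the benchmarks above are from PostgreSQL 9.1, but the query shape is portable):

```python
# Items linked to user 5 by tag OR by mark, via two EXISTS semi-joins.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE items (item_id INT, tag_id INT, mark_id INT);
CREATE TABLE tags_users (user_id INT, tag_id INT);
CREATE TABLE marks_users (user_id INT, mark_id INT);
INSERT INTO items VALUES (1, 10, 20), (2, 11, 21), (3, 12, 22);
INSERT INTO tags_users VALUES (5, 10);
INSERT INTO marks_users VALUES (5, 21);
""")
rows = con.execute("""
    SELECT i.item_id FROM items i
    WHERE EXISTS (SELECT 1 FROM tags_users tu
                  WHERE tu.tag_id = i.tag_id AND tu.user_id = 5)
       OR EXISTS (SELECT 1 FROM marks_users mu
                  WHERE mu.mark_id = i.mark_id AND mu.user_id = 5)
    ORDER BY i.item_id
""").fetchall()
print(rows)  # → [(1,), (2,)]  -- item 1 via tag, item 2 via mark, item 3 neither
```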
qid & accept id:
(11363669, 11364612)
query:
Oracle timeline report from overlapping intervals
soup:
You can do it in one whopping statement:
\nSQL> WITH timeline AS\n 2 (SELECT mydate startdate,\n 3 lead(mydate) OVER (ORDER BY mydate) - 1 enddate\n 4 FROM (SELECT startdate mydate FROM interval_test\n 5 UNION\n 6 SELECT enddate FROM interval_test)\n 7 WHERE mydate IS NOT NULL)\n 8 SELECT startdate,\n 9 enddate,\n 10 max(substr(sys_connect_by_path(item, ','), 2)) items\n 11 FROM (SELECT t.startdate,\n 12 t.enddate,\n 13 item,\n 14 row_number() OVER (PARTITION BY t.startdate, t.enddate\n 15 ORDER BY i.item) rn\n 16 FROM timeline t\n 17 JOIN\n 18 interval_test i\n 19 ON nvl(i.enddate, DATE '9999-12-31') - 1 >= t.startdate\n 20 AND i.startdate <= nvl(t.enddate, DATE '9999-12-31'))\n 21 START WITH rn = 1\n 22 CONNECT BY rn = PRIOR rn + 1\n 23 AND startdate = PRIOR startdate\n 24 GROUP BY startdate, enddate\n 25 ORDER BY startdate;\n\nSTARTDATE ENDDATE ITEMS\n---------- ---------- --------------------\n2012-01-01 2012-01-31 AAA\n2012-02-01 2012-02-29 AAA,BBB\n2012-03-01 AAA\n
\nI used a first subquery to list all intervals:
\nSQL> SELECT mydate startdate,\n 2 lead(mydate) OVER (ORDER BY mydate) - 1 enddate\n 3 FROM (SELECT startdate mydate FROM interval_test\n 4 UNION\n 5 SELECT enddate FROM interval_test)\n 6 WHERE mydate IS NOT NULL;\n\nSTARTDATE ENDDATE\n---------- ----------\n2012-01-01 2012-01-31\n2012-02-01 2012-02-29\n2012-03-01\n
\njoined to the following query that lists all items on one row given two dates:
\nSELECT max(substr(sys_connect_by_path(item, ','), 2)) items\n FROM (SELECT item, row_number() OVER (ORDER BY item) rn\n FROM interval_test\n WHERE nvl(enddate, DATE '9999-12-31') >= :startdate\n AND startdate <= :enddate)\nCONNECT BY rn = PRIOR rn + 1\nSTART WITH rn = 1;\n
\n
soup wrap:
You can do it in one whopping statement:
SQL> WITH timeline AS
2 (SELECT mydate startdate,
3 lead(mydate) OVER (ORDER BY mydate) - 1 enddate
4 FROM (SELECT startdate mydate FROM interval_test
5 UNION
6 SELECT enddate FROM interval_test)
7 WHERE mydate IS NOT NULL)
8 SELECT startdate,
9 enddate,
10 max(substr(sys_connect_by_path(item, ','), 2)) items
11 FROM (SELECT t.startdate,
12 t.enddate,
13 item,
14 row_number() OVER (PARTITION BY t.startdate, t.enddate
15 ORDER BY i.item) rn
16 FROM timeline t
17 JOIN
18 interval_test i
19 ON nvl(i.enddate, DATE '9999-12-31') - 1 >= t.startdate
20 AND i.startdate <= nvl(t.enddate, DATE '9999-12-31'))
21 START WITH rn = 1
22 CONNECT BY rn = PRIOR rn + 1
23 AND startdate = PRIOR startdate
24 GROUP BY startdate, enddate
25 ORDER BY startdate;
STARTDATE ENDDATE ITEMS
---------- ---------- --------------------
2012-01-01 2012-01-31 AAA
2012-02-01 2012-02-29 AAA,BBB
2012-03-01 AAA
I used a first subquery to list all intervals:
SQL> SELECT mydate startdate,
2 lead(mydate) OVER (ORDER BY mydate) - 1 enddate
3 FROM (SELECT startdate mydate FROM interval_test
4 UNION
5 SELECT enddate FROM interval_test)
6 WHERE mydate IS NOT NULL;
STARTDATE ENDDATE
---------- ----------
2012-01-01 2012-01-31
2012-02-01 2012-02-29
2012-03-01
joined to the following query that lists all items on one row given two dates:
SELECT max(substr(sys_connect_by_path(item, ','), 2)) items
FROM (SELECT item, row_number() OVER (ORDER BY item) rn
FROM interval_test
WHERE nvl(enddate, DATE '9999-12-31') >= :startdate
AND startdate <= :enddate)
CONNECT BY rn = PRIOR rn + 1
START WITH rn = 1;
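The timeline construction can be mirrored in plain Python: split the boundary dates into consecutive sub-intervals, then list the items covering each one (made-up intervals chosen to reproduce the sample output; this sketch uses half-open intervals rather than the inclusive end dates above, and None marks an open end):

```python
# Build a timeline of sub-intervals and the items active in each.
from datetime import date

items = {
    'AAA': (date(2012, 1, 1), None),               # open-ended
    'BBB': (date(2012, 2, 1), date(2012, 3, 1)),
}

# Every distinct boundary date, then consecutive pairs (last one open-ended).
bounds = sorted({d for se in items.values() for d in se if d is not None})
intervals = list(zip(bounds, bounds[1:] + [None]))

def covers(start, end, s, e):
    # Item [s, e) overlaps sub-interval [start, end); None means unbounded.
    return s < (end or date.max) and start < (e or date.max)

timeline = [
    (start, end, sorted(n for n, (s, e) in items.items() if covers(start, end, s, e)))
    for start, end in intervals
]
for row in timeline:
    print(row)  # AAA / AAA,BBB / AAA, as in the sample output
```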
qid & accept id:
(11396151, 11398826)
query:
From within a grails HQL, how would I use a (non-aggregate) Oracle function?
soup:
To call a function in HQL, the SQL dialect must be aware of it. You can add your function at runtime in BootStrap.groovy like this:
\nimport org.hibernate.dialect.function.SQLFunctionTemplate\nimport org.hibernate.Hibernate\n\ndef dialect = applicationContext.sessionFactory.dialect\ndef getCurrentTerm = new SQLFunctionTemplate(Hibernate.INTEGER, "TT_STUDENT.STU_GENERAL.F_Get_Current_term()")\ndialect.registerFunction('F_Get_Current_term', getCurrentTerm)\n
\nOnce registered, you should be able to call the function in your queries:
\ndef a = SaturnStvterm.findAll("from SaturnStvterm as s where id > TT_STUDENT.STU_GENERAL.F_Get_Current_term()")\n
\n
soup wrap:
To call a function in HQL, the SQL dialect must be aware of it. You can add your function at runtime in BootStrap.groovy like this:
import org.hibernate.dialect.function.SQLFunctionTemplate
import org.hibernate.Hibernate
def dialect = applicationContext.sessionFactory.dialect
def getCurrentTerm = new SQLFunctionTemplate(Hibernate.INTEGER, "TT_STUDENT.STU_GENERAL.F_Get_Current_term()")
dialect.registerFunction('F_Get_Current_term', getCurrentTerm)
Once registered, you should be able to call the function in your queries:
def a = SaturnStvterm.findAll("from SaturnStvterm as s where id > TT_STUDENT.STU_GENERAL.F_Get_Current_term()")
qid & accept id:
(11404664, 11407741)
query:
SQL Update most recent in table instead of most recent on selected record
soup:
The problem is that you're not correlating your subquery with your outer query. It helps to use different aliases for all tables involved, and the join to Members inside the subquery seems unnecessary:
\ncreate table Members (ID int not null,Attend_Freq int not null,Last_Attend_Date datetime not null)\ninsert into Members (ID,Attend_Freq,Last_Attend_Date) values\n(123,4,'19000101')\n\ncreate table Attendance (ID int not null,Member_ID int not null,Last_Attend_Date datetime not null)\ninsert into Attendance (ID,Member_ID,Last_Attend_Date) values\n(987,123,'20120605'),\n(888,123,'20120604'),\n(567,123,'20120603'),\n(456,234,'20120630'),\n(1909,292,'20120705')\n\nupdate M\nset\n Last_Attend_Date =\n (select MAX(Last_Attend_Date)\n from Attendance A2\n where A2.Member_ID = M.ID) --M is a reference to the outer table here\nfrom\n Members M\n inner join\n Attendance A\n on\n M.ID = A.Member_ID\nwhere\n m.Attend_Freq < 5 and\n A.Last_Attend_Date < DATEADD(day,-14,CURRENT_TIMESTAMP)\n\nselect * from Members\n
\nResult:
\nID Attend_Freq Last_Attend_Date\n----------- ----------- ----------------\n123 4 2012-06-05\n
\n
soup wrap:
The problem is that you're not correlating your subquery with your outer query. It helps to use different aliases for all tables involved, and the join to Members inside the subquery seems unnecessary:
create table Members (ID int not null,Attend_Freq int not null,Last_Attend_Date datetime not null)
insert into Members (ID,Attend_Freq,Last_Attend_Date) values
(123,4,'19000101')
create table Attendance (ID int not null,Member_ID int not null,Last_Attend_Date datetime not null)
insert into Attendance (ID,Member_ID,Last_Attend_Date) values
(987,123,'20120605'),
(888,123,'20120604'),
(567,123,'20120603'),
(456,234,'20120630'),
(1909,292,'20120705')
update M
set
Last_Attend_Date =
(select MAX(Last_Attend_Date)
from Attendance A2
where A2.Member_ID = M.ID) --M is a reference to the outer table here
from
Members M
inner join
Attendance A
on
M.ID = A.Member_ID
where
m.Attend_Freq < 5 and
A.Last_Attend_Date < DATEADD(day,-14,CURRENT_TIMESTAMP)
select * from Members
Result:
ID Attend_Freq Last_Attend_Date
----------- ----------- ----------------
123 4 2012-06-05
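The correlated-subquery pattern can be sketched self-contained with Python's sqlite3 (older SQLite has no `UPDATE ... FROM` join, so the join filter becomes an `EXISTS`; the data mirrors the rows above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Members (ID INTEGER, Attend_Freq INTEGER, Last_Attend_Date TEXT);
INSERT INTO Members VALUES (123, 4, '1900-01-01');
CREATE TABLE Attendance (ID INTEGER, Member_ID INTEGER, Last_Attend_Date TEXT);
INSERT INTO Attendance VALUES
 (987, 123, '2012-06-05'),
 (888, 123, '2012-06-04'),
 (567, 123, '2012-06-03'),
 (456, 234, '2012-06-30');
""")

# The subquery is correlated on the outer row's ID; without that
# correlation every member would get the global maximum date.
conn.execute("""
UPDATE Members
   SET Last_Attend_Date = (SELECT MAX(Last_Attend_Date)
                             FROM Attendance A2
                            WHERE A2.Member_ID = Members.ID)
 WHERE Attend_Freq < 5
   AND EXISTS (SELECT 1 FROM Attendance A WHERE A.Member_ID = Members.ID)
""")

result = conn.execute(
    "SELECT Last_Attend_Date FROM Members WHERE ID = 123").fetchone()[0]
```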
qid & accept id:
(11419308, 11429318)
query:
how to pass javascript array to oracle store procedure by ado parameter object
soup:
The format is:
\nCreateParameter( name, type, direction, size, value )\n
\nThe values you'll need are:
\nadVarChar = 200\nadArray = 0x2000\nadParamInput = 1\n
\nAnd you'll call it like:
\nvar param = cmd.CreateParameter( 'par', adVarChar + adArray, adParamInput, 255, userArray )\n
\n
soup wrap:
The format is:
CreateParameter( name, type, direction, size, value )
The values you'll need are:
adVarChar = 200
adArray = 0x2000
adParamInput = 1
And you'll call it like:
var param = cmd.CreateParameter( 'par', adVarChar + adArray, adParamInput, 255, userArray )
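As a sanity check on the constants: the array variant is just the base type with the array bit set, so adding them is the same as OR-ing them. A tiny Python sketch:

```python
# ADO type constants from the answer; adArray is a bit flag, so adding it
# to the base type is equivalent to OR-ing it in (the bits don't overlap).
adVarChar = 200        # 0x00C8
adArray = 0x2000
adParamInput = 1

combined = adVarChar | adArray  # the value actually passed as the type
```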
qid & accept id:
(11419793, 11420154)
query:
Detect role in Postgresql dynamically
soup:
You have to use EXECUTE for dynamic SQL. Also, a DO statement cannot take parameters. Create a plpgsql function:
\nCREATE OR REPLACE FUNCTION f_revoke_all_from_role(_role text)\n RETURNS void AS\n$BODY$\nBEGIN\n\nIF EXISTS (SELECT 1 FROM pg_roles WHERE rolname = _role) THEN\n EXECUTE 'REVOKE ALL PRIVILEGES ON TABLE x FROM ' || quote_ident(_role);\nEND IF;\n\nEND;\n$BODY$ LANGUAGE plpgsql;\n
\nCall:
\nSELECT f_revoke_all_from_role('superman');\n
\n\nIF block is simpler with EXISTS.
\nI use quote_ident() to avoid SQLi.
\nThe table name could be the second parameter of the function ...
\n
\n
soup wrap:
You have to use EXECUTE for dynamic SQL. Also, a DO statement cannot take parameters. Create a plpgsql function:
CREATE OR REPLACE FUNCTION f_revoke_all_from_role(_role text)
RETURNS void AS
$BODY$
BEGIN
IF EXISTS (SELECT 1 FROM pg_roles WHERE rolname = _role) THEN
EXECUTE 'REVOKE ALL PRIVILEGES ON TABLE x FROM ' || quote_ident(_role);
END IF;
END;
$BODY$ LANGUAGE plpgsql;
Call:
SELECT f_revoke_all_from_role('superman');
IF block is simpler with EXISTS.
I use quote_ident() to avoid SQLi.
The table name could be the second parameter of the function ...
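For illustration, a minimal Python re-implementation of the quoting idea behind `quote_ident` (this sketch always quotes, unlike the real function, which quotes only when necessary):

```python
def quote_ident(name: str) -> str:
    """Minimal sketch of PostgreSQL's quote_ident(): wrap the identifier
    in double quotes and double any embedded double quotes, so a hostile
    role name cannot break out of the dynamic statement."""
    return '"' + name.replace('"', '""') + '"'

# The dynamic REVOKE from the answer, built the same way EXECUTE would see it.
stmt = "REVOKE ALL PRIVILEGES ON TABLE x FROM " + quote_ident('superman')
```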
qid & accept id:
(11426911, 11427306)
query:
Convert sub-subquery with a order+limit 1 to left join
soup:
I think that you might use workupdates as 'ruling table' and attach the rest there:
\nSELECT works.id, title, version, date, pages, uploaded, uri\n FROM workupdates\n JOIN info ON info.id=workupdates.info\n JOIN works ON workupdates.work = works.id\n WHERE workupdates.date =\n (SELECT MAX(date) FROM workupdates WHERE work = works.id)\n;\n
\nThis may be sub-optimal, though, since the JOINs would take place before the filtering on date.
\nOr pivoting the tables around and having works rule, maybe better:
\nSELECT works.id, title, version, date, pages, uploaded, uri\n FROM works\n JOIN workupdates ON (workupdates.work = works.id\n AND workupdates.date =\n (SELECT MAX(date) FROM workupdates WHERE work = works.id))\n JOIN info ON info.id=workupdates.info\n;\n
\nIt ought to be possible to save an iteration when joining workupdates and works, but it's not coming to me at the moment (and it might be I'm dreaming things up) :-(
\n
soup wrap:
I think that you might use workupdates as 'ruling table' and attach the rest there:
SELECT works.id, title, version, date, pages, uploaded, uri
FROM workupdates
JOIN info ON info.id=workupdates.info
JOIN works ON workupdates.work = works.id
WHERE workupdates.date =
(SELECT MAX(date) FROM workupdates WHERE work = works.id)
;
This may be sub-optimal, though, since the JOINs would take place before the filtering on date.
Or pivoting the tables around and having works rule, maybe better:
SELECT works.id, title, version, date, pages, uploaded, uri
FROM works
JOIN workupdates ON (workupdates.work = works.id
AND workupdates.date =
(SELECT MAX(date) FROM workupdates WHERE work = works.id))
JOIN info ON info.id=workupdates.info
;
It ought to be possible to save an iteration when joining workupdates and works, but it's not coming to me at the moment (and it might be I'm dreaming things up) :-(
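A self-contained sketch of the second query's shape, using Python's sqlite3 with a toy schema (table and column names follow the answer; the title and URIs are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE works (id INTEGER, title TEXT);
CREATE TABLE workupdates (work INTEGER, info INTEGER, date TEXT);
CREATE TABLE info (id INTEGER, uri TEXT);
INSERT INTO works VALUES (1, 'First Work');
INSERT INTO info VALUES (10, 'http://example.org/v1'), (11, 'http://example.org/v2');
INSERT INTO workupdates VALUES (1, 10, '2012-01-01'), (1, 11, '2012-06-01');
""")

# Keep only the newest update per work via a correlated MAX() subquery,
# then join the remaining tables onto that single row.
row = conn.execute("""
SELECT works.id, works.title, workupdates.date, info.uri
  FROM works
  JOIN workupdates ON workupdates.work = works.id
   AND workupdates.date = (SELECT MAX(date) FROM workupdates w2
                            WHERE w2.work = works.id)
  JOIN info ON info.id = workupdates.info
""").fetchone()
```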
qid & accept id:
(11436797, 11447658)
query:
Insert blank row to result after ORDER BY
soup:
You can, pretty much as Michael and Gordon did, just tack an empty row on with union all, but you need to have it before the order by:
\n...\nand to_date(to_char(t.enddatetime, 'DD-MON-YYYY')) <=\n to_date('?DATE2::?','MM/DD/YYYY')\nunion all\nselect null, null, null, null, null, null, null, null\nfrom dual\norder by eventid, starttime, actionsequence;\n
\n... and you can't use the case that Gordon had directly in the order by because it isn't a selected value - you'll get an ORA-01785. (Note that the column names in the order by are the aliases that you assigned in the select, not those in the table; and you don't include the table name/alias; and it isn't necessary to alias the null columns in the union part, but you may want to for clarity).
\nBut this relies on null being sorted after any real values, which may not always be the case (not sure, but might be affected by NLS parameters), and it isn't known if the real eventkey can ever be null anyway. So it's probably safer to introduce a dummy column in both parts of the query and use that for the ordering, but exclude it from the results by nesting the query:
\nselect crewactionfactid, crewkey, eventid, actionsequence, type,\n starttime, endtime, duration\nfrom (\n select 0 as dummy_order_field,\n t.crewactionfactid,\n t.crewkey,\n t.eventkey as eventid,\n t.actionsequence,\n case t.actiontype\n when 'DISPATCHED' then '2-Dispatched'\n when 'ASSIGNED' then '1-Assigned'\n when 'ENROUTE' then '3-Enroute'\n when 'ARRIVED' then '4-Arrived'\n else 'unknown'\n end as type,\n t.startdatetime as starttime,\n t.enddatetime as endtime,\n t.duration\n from schema_name.table_name t\n where to_date(to_char(t.startdatetime, 'DD-MON-YYYY')) >=\n to_date('?DATE1::?','MM/DD/YYYY')\n and to_date(to_char(t.enddatetime, 'DD-MON-YYYY')) <=\n to_date('?DATE2::?','MM/DD/YYYY')\n union all\n select 1, null, null, null, null, null, null, null, null\n from dual\n)\norder by dummy_order_field, eventid, starttime, actionsequence;\n
\nThe date handling is odd though, particularly the to_date(to_char(...)) parts. It looks like you're just trying to lose the time portion, in which case you can use trunc instead:
\nwhere trunc(t.startdatetime) >= to_date('?DATE1::?','MM/DD/YYYY')\nand trunc(t.enddatetime) <= to_date('?DATE2::?','MM/DD/YYYY')\n
\nBut applying any function to the date column prevents any index on it being used, so it's better to leave that alone and get the variable part in the right state for comparison:
\nwhere t.startdatetime >= to_date('?DATE1::?','MM/DD/YYYY')\nand t.enddatetime < to_date('?DATE2::?','MM/DD/YYYY') + 1\n
\nThe + 1 adds a day, so if DATE2 was 07/12/2012, the filter is < 2012-07-13 00:00:00, which is the same as <= 2012-07-12 23:59:59.
\n
soup wrap:
You can, pretty much as Michael and Gordon did, just tack an empty row on with union all, but you need to have it before the order by:
...
and to_date(to_char(t.enddatetime, 'DD-MON-YYYY')) <=
to_date('?DATE2::?','MM/DD/YYYY')
union all
select null, null, null, null, null, null, null, null
from dual
order by eventid, starttime, actionsequence;
... and you can't use the case that Gordon had directly in the order by because it isn't a selected value - you'll get an ORA-01785. (Note that the column names in the order by are the aliases that you assigned in the select, not those in the table; and you don't include the table name/alias; and it isn't necessary to alias the null columns in the union part, but you may want to for clarity).
But this relies on null being sorted after any real values, which may not always be the case (not sure, but might be affected by NLS parameters), and it isn't known if the real eventkey can ever be null anyway. So it's probably safer to introduce a dummy column in both parts of the query and use that for the ordering, but exclude it from the results by nesting the query:
select crewactionfactid, crewkey, eventid, actionsequence, type,
starttime, endtime, duration
from (
select 0 as dummy_order_field,
t.crewactionfactid,
t.crewkey,
t.eventkey as eventid,
t.actionsequence,
case t.actiontype
when 'DISPATCHED' then '2-Dispatched'
when 'ASSIGNED' then '1-Assigned'
when 'ENROUTE' then '3-Enroute'
when 'ARRIVED' then '4-Arrived'
else 'unknown'
end as type,
t.startdatetime as starttime,
t.enddatetime as endtime,
t.duration
from schema_name.table_name t
where to_date(to_char(t.startdatetime, 'DD-MON-YYYY')) >=
to_date('?DATE1::?','MM/DD/YYYY')
and to_date(to_char(t.enddatetime, 'DD-MON-YYYY')) <=
to_date('?DATE2::?','MM/DD/YYYY')
union all
select 1, null, null, null, null, null, null, null, null
from dual
)
order by dummy_order_field, eventid, starttime, actionsequence;
The date handling is odd though, particularly the to_date(to_char(...)) parts. It looks like you're just trying to lose the time portion, in which case you can use trunc instead:
where trunc(t.startdatetime) >= to_date('?DATE1::?','MM/DD/YYYY')
and trunc(t.enddatetime) <= to_date('?DATE2::?','MM/DD/YYYY')
But applying any function to the date column prevents any index on it being used, so it's better to leave that alone and get the variable part in the right state for comparison:
where t.startdatetime >= to_date('?DATE1::?','MM/DD/YYYY')
and t.enddatetime < to_date('?DATE2::?','MM/DD/YYYY') + 1
The + 1 adds a day, so if DATE2 was 07/12/2012, the filter is < 2012-07-13 00:00:00, which is the same as <= 2012-07-12 23:59:59.
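The dummy-column trick ports directly to other engines; here is a minimal sketch with Python's sqlite3 (a toy two-column table, two real rows plus the blank one):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (eventid INTEGER, starttime TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(2, '09:00'), (1, '08:00')])

# The dummy column forces the blank row to the end regardless of how the
# engine sorts NULLs; the outer query then hides the helper column.
rows = conn.execute("""
SELECT eventid, starttime FROM (
  SELECT 0 AS dummy_order_field, eventid, starttime FROM events
  UNION ALL
  SELECT 1, NULL, NULL
)
ORDER BY dummy_order_field, eventid, starttime
""").fetchall()
```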
qid & accept id:
(11441696, 11441990)
query:
Merging matching data side by side from different tables
soup:
If you have six different tables, then you need to join them together:
\nselect tjan.companyname, tjan.employee, tjan.id, . . .\nfrom tjan join\n tfeb \n on tjan.companyname = tfeb.companyname and\n tjan.employee = tfeb.employee and\n tjan.id = tfeb.id\netc. etc. etc.\n
\nThe problem that you have is that the populations in the different months may be different, so the joins will lose rows. A good way to handle this is with a driving table:
\nselect . . .\nfrom (select companyname, employee, id from tjan union\n select companyname, employee, id from tfeb union\n . . .\n ) driving left outer join\n tjan\n on tjan.companyname = driving.companyname and\n tjan.employee = driving.employee and\n tjan.id = driving.id left outer join\n tfeb\n on tfeb.companyname = driving.companyname and\n tfeb.employee = driving.employee and\n tfeb.id = driving.id left outer join\n . . .\n
\nYou can do all this in one SQL statement. There are repetitive parts (such as the column names in the select). Consider using Excel to generate these.
\n
soup wrap:
If you have six different tables, then you need to join them together:
select tjan.companyname, tjan.employee, tjan.id, . . .
from tjan join
tfeb
on tjan.companyname = tfeb.companyname and
tjan.employee = tfeb.employee and
tjan.id = tfeb.id
etc. etc. etc.
The problem that you have is that the populations in the different months may be different, so the joins will lose rows. A good way to handle this is with a driving table:
select . . .
from (select companyname, employee, id from tjan union
select companyname, employee, id from tfeb union
. . .
) driving left outer join
tjan
on tjan.companyname = driving.companyname and
tjan.employee = driving.employee and
tjan.id = driving.id left outer join
tfeb
on tfeb.companyname = driving.companyname and
tfeb.employee = driving.employee and
tfeb.id = driving.id left outer join
. . .
You can do all this in one SQL statement. There are repetitive parts (such as the column names in the select). Consider using Excel to generate these.
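A runnable sketch of the driving-table pattern with Python's sqlite3 (the hours column is a made-up stand-in for whatever per-month values you're lining up side by side):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tjan (companyname TEXT, employee TEXT, id INTEGER, hours INTEGER);
CREATE TABLE tfeb (companyname TEXT, employee TEXT, id INTEGER, hours INTEGER);
INSERT INTO tjan VALUES ('Acme', 'Alice', 1, 160);
INSERT INTO tfeb VALUES ('Acme', 'Alice', 1, 150), ('Acme', 'Bob', 2, 120);
""")

# The UNION of keys from every month drives the query, so employees who are
# missing from one month still appear, with NULLs for that month's columns.
rows = conn.execute("""
SELECT driving.employee, tjan.hours, tfeb.hours
  FROM (SELECT companyname, employee, id FROM tjan
        UNION
        SELECT companyname, employee, id FROM tfeb) driving
  LEFT JOIN tjan ON tjan.companyname = driving.companyname
                AND tjan.employee = driving.employee AND tjan.id = driving.id
  LEFT JOIN tfeb ON tfeb.companyname = driving.companyname
                AND tfeb.employee = driving.employee AND tfeb.id = driving.id
  ORDER BY driving.employee
""").fetchall()
```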
qid & accept id:
(11445551, 11445879)
query:
How can I update extreme columns within range fast?
soup:
I'm not sure what the performance of this will be like, but it's a more set-based approach than your current one:
\ndeclare @T table (CategoryID int not null,Time datetime2 not null,IsSampled bit not null,Value decimal(10,5) not null)\ninsert into @T (CategoryID,Time,IsSampled,Value) values\n(1,'2012-07-01T00:00:00.000',0,65.36347),\n(1,'2012-07-01T00:00:11.000',0,80.16729),\n(1,'2012-07-01T00:00:14.000',0,29.19716),\n(1,'2012-07-01T00:00:25.000',0,7.05847),\n(1,'2012-07-01T00:00:36.000',0,98.08257),\n(1,'2012-07-01T00:00:57.000',0,75.35524),\n(1,'2012-07-01T00:00:59.000',0,35.35524)\n\n;with BinnedValues as (\n select CategoryID,Time,IsSampled,Value,DATEADD(minute,DATEDIFF(minute,0,Time),0) as TimeBin\n from @T\n), MinMax as (\n select CategoryID,Time,IsSampled,Value,TimeBin,\n ROW_NUMBER() OVER (PARTITION BY CategoryID, TimeBin ORDER BY Value) as MinPos,\n ROW_NUMBER() OVER (PARTITION BY CategoryID, TimeBin ORDER BY Value desc) as MaxPos,\n ROW_NUMBER() OVER (PARTITION BY CategoryID, TimeBin ORDER BY Time) as Earliest\n from\n BinnedValues\n)\nupdate MinMax set IsSampled = 1 where MinPos=1 or MaxPos=1 or Earliest=1\n\nselect * from @T\n
\nResult:
\nCategoryID Time IsSampled Value\n----------- ---------------------- --------- ---------------------------------------\n1 2012-07-01 00:00:00.00 1 65.36347\n1 2012-07-01 00:00:11.00 0 80.16729\n1 2012-07-01 00:00:14.00 0 29.19716\n1 2012-07-01 00:00:25.00 1 7.05847\n1 2012-07-01 00:00:36.00 1 98.08257\n1 2012-07-01 00:00:57.00 0 75.35524\n1 2012-07-01 00:00:59.00 0 35.35524\n
\nIt could possibly be sped up if the TimeBin column could be added as a computed column to the table and added to appropriate indexes.
\nIt should also be noted that this will mark a maximum of 3 rows as sampled - if the earliest is also the min or max value, it will only be marked once (obviously), but the next nearest min or max value will not be. Also, if multiple rows have the same Value, and that is the min or max value, one of the rows will be selected arbitrarily.
\n
soup wrap:
I'm not sure what the performance of this will be like, but it's a more set-based approach than your current one:
declare @T table (CategoryID int not null,Time datetime2 not null,IsSampled bit not null,Value decimal(10,5) not null)
insert into @T (CategoryID,Time,IsSampled,Value) values
(1,'2012-07-01T00:00:00.000',0,65.36347),
(1,'2012-07-01T00:00:11.000',0,80.16729),
(1,'2012-07-01T00:00:14.000',0,29.19716),
(1,'2012-07-01T00:00:25.000',0,7.05847),
(1,'2012-07-01T00:00:36.000',0,98.08257),
(1,'2012-07-01T00:00:57.000',0,75.35524),
(1,'2012-07-01T00:00:59.000',0,35.35524)
;with BinnedValues as (
select CategoryID,Time,IsSampled,Value,DATEADD(minute,DATEDIFF(minute,0,Time),0) as TimeBin
from @T
), MinMax as (
select CategoryID,Time,IsSampled,Value,TimeBin,
ROW_NUMBER() OVER (PARTITION BY CategoryID, TimeBin ORDER BY Value) as MinPos,
ROW_NUMBER() OVER (PARTITION BY CategoryID, TimeBin ORDER BY Value desc) as MaxPos,
ROW_NUMBER() OVER (PARTITION BY CategoryID, TimeBin ORDER BY Time) as Earliest
from
BinnedValues
)
update MinMax set IsSampled = 1 where MinPos=1 or MaxPos=1 or Earliest=1
select * from @T
Result:
CategoryID Time IsSampled Value
----------- ---------------------- --------- ---------------------------------------
1 2012-07-01 00:00:00.00 1 65.36347
1 2012-07-01 00:00:11.00 0 80.16729
1 2012-07-01 00:00:14.00 0 29.19716
1 2012-07-01 00:00:25.00 1 7.05847
1 2012-07-01 00:00:36.00 1 98.08257
1 2012-07-01 00:00:57.00 0 75.35524
1 2012-07-01 00:00:59.00 0 35.35524
It could possibly be sped up if the TimeBin column could be added as a computed column to the table and added to appropriate indexes.
It should also be noted that this will mark a maximum of 3 rows as sampled - if the earliest is also the min or max value, it will only be marked once (obviously), but the next nearest min or max value will not be. Also, if multiple rows have the same Value, and that is the min or max value, one of the rows will be selected arbitrarily.
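The ROW_NUMBER binning can be sketched with Python's sqlite3 (window functions need SQLite 3.25+). This is shown as a SELECT of the rows that would be marked, with the minute bin taken from the text timestamp instead of DATEADD/DATEDIFF:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE readings (CategoryID INTEGER, Time TEXT, Value REAL);
INSERT INTO readings VALUES
 (1, '2012-07-01 00:00:00', 65.4),
 (1, '2012-07-01 00:00:11', 80.2),
 (1, '2012-07-01 00:00:25', 7.1),
 (1, '2012-07-01 00:00:36', 98.1);
""")

# substr(Time, 1, 16) keeps 'YYYY-MM-DD HH:MM', i.e. the minute bin.
# Any row ranked first by value, last by value, or earliest in its bin
# would get IsSampled = 1.
rows = conn.execute("""
WITH MinMax AS (
  SELECT Time, Value,
         ROW_NUMBER() OVER (PARTITION BY CategoryID, substr(Time, 1, 16)
                            ORDER BY Value)      AS MinPos,
         ROW_NUMBER() OVER (PARTITION BY CategoryID, substr(Time, 1, 16)
                            ORDER BY Value DESC) AS MaxPos,
         ROW_NUMBER() OVER (PARTITION BY CategoryID, substr(Time, 1, 16)
                            ORDER BY Time)       AS Earliest
    FROM readings
)
SELECT Time FROM MinMax
 WHERE MinPos = 1 OR MaxPos = 1 OR Earliest = 1
 ORDER BY Time
""").fetchall()
```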
qid & accept id:
(11456664, 11456990)
query:
Counting in sql and subas
soup:
Haven't tried it, but I think this should work:
\n select NoOfChanges, count (*) from\n ( \n select suba.id, count(*) as NoOfChanges from \n ( select id, service_type from table_name\n group by 1,2) as suba\n group by 1 \n having count (*) > 1 \n )\n subtableb\n group by NoOfChanges \n
\nYou can think of that as
\nselect NoOfChanges, count (*) from subtableb\ngroup by NoOfChanges \n
\nwhere subtableb isn't a real table, but rather the result of your previous query.
\n
soup wrap:
Haven't tried it, but I think this should work:
select NoOfChanges, count (*) from
(
select suba.id, count(*) as NoOfChanges from
( select id, service_type from table_name
group by 1,2) as suba
group by 1
having count (*) > 1
)
subtableb
group by NoOfChanges
You can think of that as
select NoOfChanges, count (*) from subtableb
group by NoOfChanges
where subtableb isn't a real table, but rather the result of your previous query.
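A runnable sketch of the nested aggregation with Python's sqlite3. The positional "GROUP BY 1,2" is spelled out with column names for portability, and the data is made up so that id 1 has two service types, id 2 has three, and id 3 only one (which the HAVING filters out):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_name (id INTEGER, service_type TEXT)")
conn.executemany("INSERT INTO table_name VALUES (?, ?)",
                 [(1, 'a'), (1, 'b'), (1, 'a'),   # id 1: 2 distinct types
                  (2, 'a'), (2, 'b'), (2, 'c'),   # id 2: 3 distinct types
                  (3, 'a')])                      # id 3: only 1, filtered out

# Inner query counts distinct service types per id; outer query counts
# how many ids had each NoOfChanges value.
rows = conn.execute("""
SELECT NoOfChanges, COUNT(*) FROM (
  SELECT suba.id, COUNT(*) AS NoOfChanges FROM
    (SELECT id, service_type FROM table_name
      GROUP BY id, service_type) AS suba
  GROUP BY suba.id
  HAVING COUNT(*) > 1
) subtableb
GROUP BY NoOfChanges
ORDER BY NoOfChanges
""").fetchall()
```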
qid & accept id:
(11463090, 11463122)
query:
Single MySQL field with comma separated values
soup:
You can use this solution:
\nSELECT b.filename\nFROM posts a\nINNER JOIN images b ON FIND_IN_SET(b.imageid, a.gallery) > 0\nWHERE a.postid = 3\n
\n\nHowever, you should really normalize your design and use a cross-reference table between posts and images. This would be the best and most efficient way of representing N:M (many-to-many) relationships. Not only is it much more efficient for retrieval, but it will vastly simplify updating and deleting image associations.
\n
\n\n\n...but the comma-separated value is easier to work with as far as the jQuery script I am using to add to it.
\n
\n
\nEven if you properly represented the N:M relationship with a cross-reference table, you can still get the imageid's in CSV format:
\nSuppose you have a posts_has_images table with primary key fields (postid, imageid):
\nYou can use GROUP_CONCAT() to get a CSV of the imageid's for each postid:
\nSELECT postid, GROUP_CONCAT(imageid) AS gallery\nFROM posts_has_images\nGROUP BY postid\n
\n
soup wrap:
You can use this solution:
SELECT b.filename
FROM posts a
INNER JOIN images b ON FIND_IN_SET(b.imageid, a.gallery) > 0
WHERE a.postid = 3
However, you should really normalize your design and use a cross-reference table between posts and images. This would be the best and most efficient way of representing N:M (many-to-many) relationships. Not only is it much more efficient for retrieval, but it will vastly simplify updating and deleting image associations.
...but the comma-separated value is easier to work with as far as the jQuery script I am using to add to it.
Even if you properly represented the N:M relationship with a cross-reference table, you can still get the imageid's in CSV format:
Suppose you have a posts_has_images table with primary key fields (postid, imageid):
You can use GROUP_CONCAT() to get a CSV of the imageid's for each postid:
SELECT postid, GROUP_CONCAT(imageid) AS gallery
FROM posts_has_images
GROUP BY postid
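SQLite's group_concat behaves like MySQL's GROUP_CONCAT for this purpose, so the cross-reference-table approach can be sketched with Python's sqlite3 (toy post/image ids):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts_has_images (postid INTEGER, imageid INTEGER)")
conn.executemany("INSERT INTO posts_has_images VALUES (?, ?)",
                 [(3, 7), (3, 9), (4, 2)])

# The normalized rows can still be handed to the jQuery side as CSV.
rows = conn.execute("""
SELECT postid, group_concat(imageid) AS gallery
  FROM posts_has_images
 GROUP BY postid
 ORDER BY postid
""").fetchall()
```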
qid & accept id:
(11468551, 11521193)
query:
Getting hours interval between date range
soup:
Thanks all for the suggestions and comments. I finally found a way to solve my problem.
\nBelow is the script for the solution I came up with:
\nDECLARE @start_date datetime = CONVERT(DATETIME,'2012-02-06 23:59:01.000',20);\nDECLARE @end_date datetime = CONVERT(DATETIME,'2012-12-08 23:59:17.000',20);\nDECLARE @org datetime ;\nDECLARE @end datetime ;\nDECLARE @datetable TABLE (h_start datetime, h_end datetime,h_sesc int);\n\nWHILE (dateadd(second, -1, dateadd(hour, datediff(hour, 0, @start_date)+1, 0))) < @end_date\nBEGIN\nSET @org = null;\nSET @org = @start_date;\nSET @end = (dateadd(second, -1, dateadd(hour, datediff(hour, 0, @org)+1, 0)));\nINSERT INTO @datetable (h_start, h_end,h_sesc)\nVALUES(dateadd(second, 0,@org), @end,DATEDIFF(second, @org,@end));\n\nSET @start_date = dateadd(second, 1,@end);\n\nEND;\n\n\nINSERT INTO @datetable (h_start, h_end,h_sesc)\nVALUES(dateadd(second, 0,@start_date), @end_date,DATEDIFF(second, dateadd(second, 0,@start_date),@end_date));\n\nSELECT * FROM @datetable;\n
\nThe above will give the following results:
\nh_start h_end h_sesc\n2012-02-06 23:59:01.000 2012-02-06 23:59:59.000 58\n2012-02-07 00:00:00.000 2012-02-07 00:59:59.000 3599\n2012-02-07 01:00:00.000 2012-02-07 01:59:59.000 3599\n2012-02-07 02:00:00.000 2012-02-07 02:59:59.000 3599\n2012-02-07 03:00:00.000 2012-02-07 03:59:59.000 3599\n2012-02-07 04:00:00.000 2012-02-07 04:59:59.000 3599\n2012-02-07 05:00:00.000 2012-02-07 05:59:59.000 3599\n
\n..\n..
\n2012-12-08 18:00:00.000 2012-12-08 18:59:59.000 3599\n2012-12-08 19:00:00.000 2012-12-08 19:59:59.000 3599\n2012-12-08 20:00:00.000 2012-12-08 20:59:59.000 3599\n2012-12-08 21:00:00.000 2012-12-08 21:59:59.000 3599\n2012-12-08 22:00:00.000 2012-12-08 22:59:59.000 3599\n2012-12-08 23:00:00.000 2012-12-08 23:59:17.000 3557\n
\nHope someone will find it useful.
\n
soup wrap:
Thanks all for the suggestions and comments. I finally found a way to solve my problem.
Below is the script for the solution I came up with:
DECLARE @start_date datetime = CONVERT(DATETIME,'2012-02-06 23:59:01.000',20);
DECLARE @end_date datetime = CONVERT(DATETIME,'2012-12-08 23:59:17.000',20);
DECLARE @org datetime ;
DECLARE @end datetime ;
DECLARE @datetable TABLE (h_start datetime, h_end datetime,h_sesc int);
WHILE (dateadd(second, -1, dateadd(hour, datediff(hour, 0, @start_date)+1, 0))) < @end_date
BEGIN
SET @org = null;
SET @org = @start_date;
SET @end = (dateadd(second, -1, dateadd(hour, datediff(hour, 0, @org)+1, 0)));
INSERT INTO @datetable (h_start, h_end,h_sesc)
VALUES(dateadd(second, 0,@org), @end,DATEDIFF(second, @org,@end));
SET @start_date = dateadd(second, 1,@end);
END;
INSERT INTO @datetable (h_start, h_end,h_sesc)
VALUES(dateadd(second, 0,@start_date), @end_date,DATEDIFF(second, dateadd(second, 0,@start_date),@end_date));
SELECT * FROM @datetable;
The above will give the following results:
h_start h_end h_sesc
2012-02-06 23:59:01.000 2012-02-06 23:59:59.000 58
2012-02-07 00:00:00.000 2012-02-07 00:59:59.000 3599
2012-02-07 01:00:00.000 2012-02-07 01:59:59.000 3599
2012-02-07 02:00:00.000 2012-02-07 02:59:59.000 3599
2012-02-07 03:00:00.000 2012-02-07 03:59:59.000 3599
2012-02-07 04:00:00.000 2012-02-07 04:59:59.000 3599
2012-02-07 05:00:00.000 2012-02-07 05:59:59.000 3599
..
..
2012-12-08 18:00:00.000 2012-12-08 18:59:59.000 3599
2012-12-08 19:00:00.000 2012-12-08 19:59:59.000 3599
2012-12-08 20:00:00.000 2012-12-08 20:59:59.000 3599
2012-12-08 21:00:00.000 2012-12-08 21:59:59.000 3599
2012-12-08 22:00:00.000 2012-12-08 22:59:59.000 3599
2012-12-08 23:00:00.000 2012-12-08 23:59:17.000 3557
Hope someone will find it useful.
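The same bucketing is straightforward outside SQL too; here is a Python sketch that reproduces the h_start/h_end/h_sesc rows for a shorter range:

```python
from datetime import datetime, timedelta

def hour_buckets(start, end):
    """Split [start, end] into hour-aligned (h_start, h_end, seconds)
    triples, the same shape the T-SQL WHILE loop builds: h_end is the
    last second of each bucket, hence the one-second step between them."""
    buckets = []
    cur = start
    while True:
        # Top of the next hour, then back one second, as in the T-SQL.
        top = cur.replace(minute=0, second=0, microsecond=0) + timedelta(hours=1)
        h_end = top - timedelta(seconds=1)
        if h_end >= end:
            # Final partial bucket, matching the trailing INSERT.
            buckets.append((cur, end, int((end - cur).total_seconds())))
            return buckets
        buckets.append((cur, h_end, int((h_end - cur).total_seconds())))
        cur = h_end + timedelta(seconds=1)

buckets = hour_buckets(datetime(2012, 2, 6, 23, 59, 1),
                       datetime(2012, 2, 7, 2, 59, 17))
```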
qid & accept id:
(11480527, 11480595)
query:
MySQL - Select the least day of the current month/year, not necessarily the first day of the month
soup:
If you are interested in returning only one row, the easiest way to do this would be:
\nSELECT t.*\n FROM table_name t\n WHERE t.name = '$username'\n AND t.theDate >= CAST(DATE_FORMAT(NOW(),'%Y-%m-01') AS DATE)\n AND t.theDate < DATE_ADD(DATE_FORMAT(NOW(),'%Y-%m-01'), INTERVAL 1 MONTH)\n ORDER BY t.name DESC, t.theDate DESC\n LIMIT 1\n
\nThe ORDER BY is based on the assumption that you have an index on (or with leading columns of) (name,theDate). That would be the most appropriate index for the predicates (i.e. conditions in the WHERE clause). There's really no need for us to sort the name column, since we know it's going to be equal to something... but specifying the ORDER BY in this way makes it more likely MySQL will do a reverse scan operation on the index, to return the rows in the correct order, avoiding a filesort operation.
\nNOTE: I specify the bare theDate column in the conditions in the WHERE clause, rather than wrapping that in any function... by specifying the bare column and a bounded range, we enable MySQL to make use of an index range scan operation. There are other possible ways to include this condition in the WHERE clause, for example...
\nDATE_FORMAT(t.theDate,'%Y-%m') = DATE_FORMAT(NOW(),'%Y-%m')\n
\nwhich will return an equivalent result, but a predicate like this is not sargable. That is, MySQL can't/won't do a range scan on an index to satisfy this.
\nIf you are intending to get all the rows for the "least" date in a month for a given user (your question doesn't seem to indicate that you need only one row), here's one way to get that result:
\nSELECT t.* \n FROM table_name t\n JOIN ( SELECT s.name\n , s.theDate\n FROM table_name s \n WHERE s.name = '$username'\n AND s.theDate >= CAST(DATE_FORMAT(NOW(),'%Y-%m-01') AS DATE)\n AND s.theDate < DATE_ADD(DATE_FORMAT(NOW(),'%Y-%m-01'), INTERVAL 1 MONTH)\n ORDER BY s.name DESC, s.theDate DESC\n LIMIT 1\n ) r\n ON r.name = t.name\n AND r.theDate = t.theDate \n
\nAgain, MySQL can make use of an index (if available) with leading columns (name,theDate) to satisfy the predicates, and to do a reverse scan operation (avoiding a sort), and to do the JOIN operation.
\nNOTE: We're assuming here that 'theDate' is datatype DATE (with no time component). If it's a DATETIME or a TIMESTAMP, there's a potential for a time component, and that query may not return all rows for a given "date" value, if the time components are different for the rows with the same "date". (e.g. '2012-07-13 17:30' and '2012-07-13 19:55' are different datetime values.) If we want to return both of those rows (because both are a date of "July 13"), we need to do a range scan instead of an equality test.
\nSELECT t.* \n FROM table_name t\n JOIN ( SELECT s.name\n , s.theDate\n FROM table_name s \n WHERE s.name = '$username'\n AND s.theDate >= CAST(DATE_FORMAT(NOW(),'%Y-%m-01') AS DATE)\n AND s.theDate < DATE_ADD(DATE_FORMAT(NOW(),'%Y-%m-01'), INTERVAL 1 MONTH)\n ORDER BY s.name DESC, s.theDate DESC\n LIMIT 1\n ) r\n ON t.name = r.name \n AND t.theDate >= r.theDate\n AND t.theDate < DATE_FORMAT(DATE_ADD(r.theDate,INTERVAL 1 DAY),'%Y-%m-%d')\n
\nNote those last two lines... we're looking for any rows with a theDate value that is greater than or equal to the "least" value found for the current month AND that is ALSO less than midnight of the following day.
\n
soup wrap:
If you are interested in returning only one row, the easiest way to do this would be:
SELECT t.*
FROM table_name t
WHERE t.name = '$username'
AND t.theDate >= CAST(DATE_FORMAT(NOW(),'%Y-%m-01') AS DATE)
AND t.theDate < DATE_ADD(DATE_FORMAT(NOW(),'%Y-%m-01'), INTERVAL 1 MONTH)
ORDER BY t.name DESC, t.theDate DESC
LIMIT 1
The ORDER BY is based on the assumption that you have an index on (or with leading columns of) (name,theDate). That would be the most appropriate index for the predicates (i.e. conditions in the WHERE clause). There's really no need for us to sort the name column, since we know it's going to be equal to something... but specifying the ORDER BY in this way makes it more likely MySQL will do a reverse scan operation on the index, to return the rows in the correct order, avoiding a filesort operation.
NOTE: I specify the bare theDate column in the conditions in the WHERE clause, rather than wrapping that in any function... by specifying the bare column and a bounded range, we enable MySQL to make use of an index range scan operation. There are other possible ways to include this condition in the WHERE clause, for example...
DATE_FORMAT(t.theDate,'%Y-%m') = DATE_FORMAT(NOW(),'%Y-%m')
which will return an equivalent result, but a predicate like this is not sargable. That is, MySQL can't/won't do a range scan on an index to satisfy this.
If you are intending to get all the rows for the "least" date in a month for a given user (your question doesn't seem to indicate that you need only one row), here's one way to get that result:
SELECT t.*
FROM table_name t
JOIN ( SELECT s.name
, s.theDate
FROM table_name s
WHERE s.name = '$username'
AND s.theDate >= CAST(DATE_FORMAT(NOW(),'%Y-%m-01') AS DATE)
AND s.theDate < DATE_ADD(DATE_FORMAT(NOW(),'%Y-%m-01'), INTERVAL 1 MONTH)
ORDER BY s.name DESC, s.theDate DESC
LIMIT 1
) r
ON r.name = t.name
AND r.theDate = t.theDate
Again, MySQL can make use of an index (if available) with leading columns (name,theDate) to satisfy the predicates, and to do a reverse scan operation (avoiding a sort), and to do the JOIN operation.
NOTE: We're assuming here that 'theDate' is datatype DATE (with no time component). If it's a DATETIME or a TIMESTAMP, there's a potential for a time component, and that query may not return all rows for a given "date" value, if the time components are different for the rows with the same "date". (e.g. '2012-07-13 17:30' and '2012-07-13 19:55' are different datetime values.) If we want to return both of those rows (because both are a date of "July 13"), we need to do a range scan instead of an equality test.
SELECT t.*
FROM table_name t
JOIN ( SELECT s.name
, s.theDate
FROM table_name s
WHERE s.name = '$username'
AND s.theDate >= CAST(DATE_FORMAT(NOW(),'%Y-%m-01') AS DATE)
AND s.theDate < DATE_ADD(DATE_FORMAT(NOW(),'%Y-%m-01'), INTERVAL 1 MONTH)
ORDER BY s.name DESC, s.theDate DESC
LIMIT 1
) r
ON t.name = r.name
AND t.theDate >= r.theDate
AND t.theDate < DATE_FORMAT(DATE_ADD(r.theDate,INTERVAL 1 DAY),'%Y-%m-%d')
Note those last two lines... we're looking for any rows with a theDate value that is greater than or equal to the "least" value found for the current month AND that is ALSO less than midnight of the following day.
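A self-contained sketch of the one-row case with Python's sqlite3. Fixed month bounds stand in for the DATE_FORMAT(NOW(), ...) expressions, and the sort is ascending here so LIMIT 1 picks the least date; the table and data are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (name TEXT, theDate TEXT)")
conn.executemany("INSERT INTO visits VALUES (?, ?)",
                 [('sam', '2012-06-30'),   # previous month, out of range
                  ('sam', '2012-07-03'),   # least day of July for sam
                  ('sam', '2012-07-13')])

# Half-open range on the bare column keeps the predicate sargable;
# '2012-07-01' / '2012-08-01' play the role of the NOW()-derived bounds.
row = conn.execute("""
SELECT name, theDate FROM visits
 WHERE name = 'sam'
   AND theDate >= '2012-07-01'
   AND theDate < '2012-08-01'
 ORDER BY theDate
 LIMIT 1
""").fetchone()
```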
qid & accept id:
(11495713, 11495825)
query:
Return results of query based on todays date in SQL (MySQL) Part 2
soup:
You'll want to first JOIN the other table onto the first using related columns (I'm assuming id in the other table is related to table_c_id).
\nAnd as I had stated in my answer to your previous question, you're better off making the comparison on the bare datetime column so that the query remains sargable (i.e. able to utilize indexes):
\nSELECT a.value\nFROM table_c a\nINNER JOIN table_a b ON a.table_c_id = b.id\nWHERE a.table_c_id IN (9,17,25) AND\n b.crm_date_time_column >= UNIX_TIMESTAMP(CURDATE())\nGROUP BY a.value \n
\nThis assumes the crm_date_time_column will never contain times which are in the future (e.g. tomorrow, next month, etc.), but if it can, you would just add:
\nAND b.crm_date_time_column < UNIX_TIMESTAMP(CURDATE() + INTERVAL 1 DAY)\n
\nas another condition in the WHERE clause.
\n
soup wrap:
You'll want to first JOIN the other table onto the first using related columns (I'm assuming id in the other table is related to table_c_id).
And as I had stated in my answer to your previous question, you're better off making the comparison on the bare datetime column so that the query remains sargable (i.e. able to utilize indexes):
SELECT a.value
FROM table_c a
INNER JOIN table_a b ON a.table_c_id = b.id
WHERE a.table_c_id IN (9,17,25) AND
b.crm_date_time_column >= UNIX_TIMESTAMP(CURDATE())
GROUP BY a.value
This assumes the crm_date_time_column will never contain times which are in the future (e.g. tomorrow, next month, etc.), but if it can, you would just add:
AND b.crm_date_time_column < UNIX_TIMESTAMP(CURDATE() + INTERVAL 1 DAY)
as another condition in the WHERE clause.
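The half-open day range can be computed client-side as well; a small Python sketch of the two UNIX_TIMESTAMP bounds for a fixed "today" (UTC is assumed so the numbers are deterministic):

```python
from datetime import datetime, timedelta, timezone

# Python equivalent of UNIX_TIMESTAMP(CURDATE()) and
# UNIX_TIMESTAMP(CURDATE() + INTERVAL 1 DAY), with a fixed "today" in UTC.
today = datetime(2012, 7, 16, tzinfo=timezone.utc)
day_start = today.timestamp()
day_end = (today + timedelta(days=1)).timestamp()

# Rows would then be filtered with: ts >= day_start AND ts < day_end
```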
qid & accept id:
(11510950, 11511356)
query:
Which values are missing in SQL from a list?
soup:
You could also try using EXCEPT (similar to MINUS in Oracle):
\n(SELECT 1\nUNION\nSELECT 2\nUNION \nSELECT 3\nUNION\nSELECT 4\nUNION\nSELECT 5\nUNION\nSELECT 6)\nEXCEPT\n(SELECT 2\n UNION\n SELECT 3\n UNION\n SELECT 4)\n
\nOr, more relevant to your example:
\n(SELECT 1\nUNION\nSELECT 2\nUNION \nSELECT 3\nUNION\nSELECT 4\nUNION\nSELECT 5\nUNION\nSELECT 6)\nEXCEPT\n(SELECT Field FROM Table) \n
\nwhere Field contains 2, 4, and 5.
\n
soup wrap:
You could also try using EXCEPT (similar to MINUS in Oracle):
(SELECT 1
UNION
SELECT 2
UNION
SELECT 3
UNION
SELECT 4
UNION
SELECT 5
UNION
SELECT 6)
EXCEPT
(SELECT 2
UNION
SELECT 3
UNION
SELECT 4)
Or, more relevant to your example:
(SELECT 1
UNION
SELECT 2
UNION
SELECT 3
UNION
SELECT 4
UNION
SELECT 5
UNION
SELECT 6)
EXCEPT
(SELECT Field FROM Table)
where Field contains 2, 4, and 5.
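A quick way to check this behaviour is through Python's sqlite3 module, since SQLite also supports EXCEPT (the values 2, 4, 5 below mirror the example's Field contents):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (field INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(2,), (4,), (5,)])

# All candidate values EXCEPT those present in the table.
missing = [r[0] for r in con.execute("""
    SELECT 1 UNION SELECT 2 UNION SELECT 3
    UNION SELECT 4 UNION SELECT 5 UNION SELECT 6
    EXCEPT
    SELECT field FROM t
    ORDER BY 1
""")]
print(missing)  # [1, 3, 6]
```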
qid & accept id:
(11568694, 11568814)
query:
SQL relational insert to 2 tables in single query without resorting to mysql_insert_id()
soup:
thanks to @hackattack, who found this ? answered already elsewhere.
\nBEGIN\nINSERT INTO users (username, password) \n VALUES('test', 'test')\nINSERT INTO profiles (userid, bio, homepage) \n VALUES(LAST_INSERT_ID(),'Hello world!', 'http://www.stackoverflow.com');\nCOMMIT;\n
\nBUT, ALAS - that didn't work.\nThe MySQL 5 reference shows it slightly different syntax:
\nINSERT INTO `table2` (`description`) \n VALUES('sdfsdf');# 1 row affected.\nINSERT INTO `table1`(`table1_id`,`title`) \n VALUES(LAST_INSERT_ID(),'hello world');\n
\nAnd, lo/behold - that works!
\nMore trouble ahead\nAlthough the query will succeed in phpMyAdmin, my PHP installation complains about the query and throws a syntax error. I resorted to doing this the php-way and making 2 separate queries and using mysql_insert_id()
\nI find that annoying, but I guess that's not much less server load than a transaction.
\n
soup wrap:
Thanks to @hackattack, who found this question answered already elsewhere.
BEGIN
INSERT INTO users (username, password)
VALUES('test', 'test')
INSERT INTO profiles (userid, bio, homepage)
VALUES(LAST_INSERT_ID(),'Hello world!', 'http://www.stackoverflow.com');
COMMIT;
BUT, ALAS - that didn't work.
The MySQL 5 reference shows slightly different syntax:
INSERT INTO `table2` (`description`)
VALUES('sdfsdf');# 1 row affected.
INSERT INTO `table1`(`table1_id`,`title`)
VALUES(LAST_INSERT_ID(),'hello world');
And, lo and behold - that works!
More trouble ahead
Although the query will succeed in phpMyAdmin, my PHP installation complains about the query and throws a syntax error. I resorted to doing this the PHP way, making 2 separate queries and using mysql_insert_id().
I find that annoying, but I guess that's not much less server load than a transaction.
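When you do fall back to two statements from application code, the same pattern works in any client library that exposes the last generated key. A minimal sketch with Python's sqlite3, where cursor.lastrowid plays the role of mysql_insert_id() (table definitions match the example above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT, password TEXT)")
con.execute("CREATE TABLE profiles (userid INTEGER, bio TEXT, homepage TEXT)")

cur = con.cursor()
cur.execute("INSERT INTO users (username, password) VALUES (?, ?)", ("test", "test"))
new_id = cur.lastrowid  # analogue of MySQL's LAST_INSERT_ID()
cur.execute("INSERT INTO profiles (userid, bio, homepage) VALUES (?, ?, ?)",
            (new_id, "Hello world!", "http://www.stackoverflow.com"))
con.commit()

row = con.execute("SELECT userid, bio FROM profiles").fetchone()
print(row)  # (1, 'Hello world!')
```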
qid & accept id:
(11696995, 11697220)
query:
SQL: retrieve records between dates in all databases
soup:
There's no need for the Date(...) as far as i can tell. This example seems to work
\nDECLARE @TheDate Date = '2012-07-01';\n\nSELECT 'hello' WHERE (@TheDate BETWEEN '2012-04-01' AND '2012-06-30')\n--None returned\nSET @TheDate = '2012-05-01'\n\nSELECT 'hello' WHERE (@TheDate BETWEEN '2012-04-01' AND '2012-06-30')\n--selects hello\n
\nEdit Btw worth looking at This Question with the date time answer (will post here just to save effort)
\nThe between statement can cause issues with range boundaries for dates as
\nBETWEEN '01/01/2009' AND '01/31/2009'\n
\nis really interpreted as
\nBETWEEN '01/01/2009 00:00:00' AND '01/31/2009 00:00:00'\n
\nso will miss anything that occurred during the day of Jan 31st. In this case, you will have to use:
\nmyDate >= '01/01/2009 00:00:00' AND myDate < '02/01/2009 00:00:00' --CORRECT!\n
\nor
\nBETWEEN '01/01/2009 00:00:00' AND '01/31/2009 23:59:59' --WRONG! (see update!)\n
\nUPDATE: It is entirely possible to have records created within that last second of the day, with a datetime as late as 01/01/2009 23:59:59.997!!
\nFor this reason, the BETWEEN (firstday) AND (lastday 23:59:59) approach is not recommended.
\nUse the myDate >= (firstday) AND myDate < (Lastday+1) approach instead.
\n
soup wrap:
There's no need for the Date(...) as far as I can tell. This example seems to work:
DECLARE @TheDate Date = '2012-07-01';
SELECT 'hello' WHERE (@TheDate BETWEEN '2012-04-01' AND '2012-06-30')
--None returned
SET @TheDate = '2012-05-01'
SELECT 'hello' WHERE (@TheDate BETWEEN '2012-04-01' AND '2012-06-30')
--selects hello
Edit: Btw, it's worth looking at this question with the date/time answer (posted here just to save effort).
The between statement can cause issues with range boundaries for dates as
BETWEEN '01/01/2009' AND '01/31/2009'
is really interpreted as
BETWEEN '01/01/2009 00:00:00' AND '01/31/2009 00:00:00'
so will miss anything that occurred during the day of Jan 31st. In this case, you will have to use:
myDate >= '01/01/2009 00:00:00' AND myDate < '02/01/2009 00:00:00' --CORRECT!
or
BETWEEN '01/01/2009 00:00:00' AND '01/31/2009 23:59:59' --WRONG! (see update!)
UPDATE: It is entirely possible to have records created within that last second of the day, with a datetime as late as 01/01/2009 23:59:59.997!!
For this reason, the BETWEEN (firstday) AND (lastday 23:59:59) approach is not recommended.
Use the myDate >= (firstday) AND myDate < (Lastday+1) approach instead.
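The half-open-interval advice is easy to verify. A sketch with Python's sqlite3 (hypothetical orders table) shows that the >= / < form catches a 23:59:59.997 row that the inclusive BETWEEN ... 23:59:59 form silently drops:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (mydate TEXT)")
con.executemany("INSERT INTO orders VALUES (?)", [
    ("2009-01-31 23:59:59.997",),  # the edge case BETWEEN ... 23:59:59 misses
    ("2009-01-15 12:00:00.000",),
    ("2009-02-01 00:00:00.000",),  # next month, must be excluded
])

# Half-open interval: >= first day, < first day of next month.
hits = con.execute("""
    SELECT COUNT(*) FROM orders
    WHERE mydate >= '2009-01-01 00:00:00' AND mydate < '2009-02-01 00:00:00'
""").fetchone()[0]

# The inclusive-BETWEEN variant drops the last-second row.
misses = con.execute("""
    SELECT COUNT(*) FROM orders
    WHERE mydate BETWEEN '2009-01-01 00:00:00' AND '2009-01-31 23:59:59'
""").fetchone()[0]
print(hits, misses)  # 2 1
```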
qid & accept id:
(11753269, 11768001)
query:
string comparing query with chinese chars - Oracle Database
soup:
SQL> create table mytbl (data_col varchar2(200));\n Table created\n SQL> insert into mytbl values('在职'); \n 1 row inserted.\n SQL> commit;\n Commit complete.\n SQL> select * from mytbl where data_col like '%在职%';\n DATA_COL \n -----------\n 在职 \n\n SQL> SELECT * FROM nls_database_parameters where parameter='NLS_CHARACTERSET';\n PARAMETER VALUE \n ------------------------------ ----------------------------------------\n NLS_CHARACTERSET AL32UTF8 \n
\nYour NLS_CHARACTERSET should be set to AL32UTF8. So try
\n SQL> ALTER SESSION SET NLS_CHARACTERSET = 'AL32UTF8';\n
\nAlso make sure that parameter NLS_NCHAR_CHARACTERSET is set to UTF8.
\n SQL> ALTER SESSION SET NLS_NCHAR_CHARACTERSET = 'UTF8';\n
\n
soup wrap:
SQL> create table mytbl (data_col varchar2(200));
Table created
SQL> insert into mytbl values('在职');
1 row inserted.
SQL> commit;
Commit complete.
SQL> select * from mytbl where data_col like '%在职%';
DATA_COL
-----------
在职
SQL> SELECT * FROM nls_database_parameters where parameter='NLS_CHARACTERSET';
PARAMETER VALUE
------------------------------ ----------------------------------------
NLS_CHARACTERSET AL32UTF8
Your NLS_CHARACTERSET should be set to AL32UTF8. So try
SQL> ALTER SESSION SET NLS_CHARACTERSET = 'AL32UTF8';
Also make sure that parameter NLS_NCHAR_CHARACTERSET is set to UTF8.
SQL> ALTER SESSION SET NLS_NCHAR_CHARACTERSET = 'UTF8';
qid & accept id:
(11762700, 11762828)
query:
How do I get row id of a row in sql server
soup:
SQL Server does not track the order of inserted rows, so there is no reliable way to get that information given your current table structure. Even if employee_id is an IDENTITY column, it is not 100% foolproof to rely on that for order of insertion (since you can fill gaps and even create duplicate ID values using SET IDENTITY_INSERT ON). If employee_id is an IDENTITY column and you are sure that rows aren't manually inserted out of order, you should be able to use this variation of your query to select the data in sequence, newest first:
\nSELECT \n ROW_NUMBER() OVER (ORDER BY EMPLOYEE_ID DESC) AS ID, \n EMPLOYEE_ID,\n EMPLOYEE_NAME \nFROM dbo.CSBCA1_5_FPCIC_2012_EES207201222743\nORDER BY ID;\n
\nYou can make a change to your table to track this information for new rows, but you won't be able to derive it for your existing data (they will all me marked as inserted at the time you make this change).
\nALTER TABLE dbo.CSBCA1_5_FPCIC_2012_EES207201222743 \n-- wow, who named this?\n ADD CreatedDate DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP;\n
\nNote that this may break existing code that just does INSERT INTO dbo.whatever SELECT/VALUES() - e.g. you may have to revisit your code and define a proper, explicit column list.
\n
soup wrap:
SQL Server does not track the order of inserted rows, so there is no reliable way to get that information given your current table structure. Even if employee_id is an IDENTITY column, it is not 100% foolproof to rely on that for order of insertion (since you can fill gaps and even create duplicate ID values using SET IDENTITY_INSERT ON). If employee_id is an IDENTITY column and you are sure that rows aren't manually inserted out of order, you should be able to use this variation of your query to select the data in sequence, newest first:
SELECT
ROW_NUMBER() OVER (ORDER BY EMPLOYEE_ID DESC) AS ID,
EMPLOYEE_ID,
EMPLOYEE_NAME
FROM dbo.CSBCA1_5_FPCIC_2012_EES207201222743
ORDER BY ID;
You can make a change to your table to track this information for new rows, but you won't be able to derive it for your existing data (they will all be marked as inserted at the time you make this change).
ALTER TABLE dbo.CSBCA1_5_FPCIC_2012_EES207201222743
-- wow, who named this?
ADD CreatedDate DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP;
Note that this may break existing code that just does INSERT INTO dbo.whatever SELECT/VALUES() - e.g. you may have to revisit your code and define a proper, explicit column list.
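The ROW_NUMBER() part translates directly to any engine with window functions. A sketch with Python's sqlite3 (3.25+ required for window functions; sample data is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employees (employee_id INTEGER, employee_name TEXT)")
con.executemany("INSERT INTO employees VALUES (?, ?)",
                [(1, "Ann"), (2, "Bob"), (3, "Cid")])

# Newest (highest id) first, numbered from 1.
rows = con.execute("""
    SELECT ROW_NUMBER() OVER (ORDER BY employee_id DESC) AS id,
           employee_id, employee_name
    FROM employees
    ORDER BY id
""").fetchall()
print(rows)  # [(1, 3, 'Cid'), (2, 2, 'Bob'), (3, 1, 'Ann')]
```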
qid & accept id:
(11769527, 11769986)
query:
vb.net comparing two databases then insert or delete
soup:
This query will return all of the rows in the attached table that are not in the local version of the table
\nSELECT * FROM attachedTable \nWHERE col1 NOT IN( SELECT lt.col1 FROM localTable as lt)\n
\nAnd this will do the converse, returning all rows in the local table that are not matched in the remote table.
\nSELECT * FROM localTable \nWHERE col1 NOT IN( SELECT rt.col1 FROM attachedTable As rt)\n
\n
soup wrap:
This query will return all of the rows in the attached table that are not in the local version of the table
SELECT * FROM attachedTable
WHERE col1 NOT IN( SELECT lt.col1 FROM localTable as lt)
And this will do the converse, returning all rows in the local table that are not matched in the remote table.
SELECT * FROM localTable
WHERE col1 NOT IN( SELECT rt.col1 FROM attachedTable As rt)
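Both shapes are easy to try with Python's sqlite3. One caveat worth adding, not stated in the answer: if the subquery's column can ever contain NULL, NOT IN returns no rows at all, so the NOT EXISTS form is safer in that case.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE localTable (col1 INTEGER)")
con.execute("CREATE TABLE attachedTable (col1 INTEGER)")
con.executemany("INSERT INTO localTable VALUES (?)", [(1,), (2,)])
con.executemany("INSERT INTO attachedTable VALUES (?)", [(2,), (3,)])

diff = [r[0] for r in con.execute("""
    SELECT col1 FROM attachedTable
    WHERE col1 NOT IN (SELECT lt.col1 FROM localTable AS lt)
""")]
print(diff)  # [3]

# Caveat: with a NULL in localTable.col1, NOT IN yields no rows at all.
con.execute("INSERT INTO localTable VALUES (NULL)")
diff_with_null = [r[0] for r in con.execute("""
    SELECT col1 FROM attachedTable
    WHERE col1 NOT IN (SELECT lt.col1 FROM localTable AS lt)
""")]
print(diff_with_null)  # []
```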
qid & accept id:
(11783678, 11788849)
query:
How can I find tables which reference a particular row via a foreign key?
soup:
NULL values in referencing columns
\nThis query produces the DML statement to find all rows in all tables, where a column has a foreign-key constraint referencing another table but hold a NULL value in that column:
\nWITH x AS (\n SELECT c.conrelid::regclass AS tbl\n , c.confrelid::regclass AS ftbl\n , quote_ident(k.attname) AS fk\n , quote_ident(pf.attname) AS pk\n FROM pg_constraint c\n JOIN pg_attribute k ON (k.attrelid, k.attnum) = (c.conrelid, c.conkey[1])\n JOIN pg_attribute f ON (f.attrelid, f.attnum) = (c.confrelid, c.confkey[1])\n LEFT JOIN pg_constraint p ON p.conrelid = c.conrelid AND p.contype = 'p'\n LEFT JOIN pg_attribute pf ON (pf.attrelid, pf.attnum)\n = (p.conrelid, p.conkey[1])\n WHERE c.contype = 'f'\n AND c.confrelid = 'fk_tbl'::regclass -- references to this tbl\n AND f.attname = 'fk_tbl_id' -- and only to this column\n)\nSELECT string_agg(format(\n'SELECT %L AS tbl\n , %L AS pk\n , %s::text AS pk_val\n , %L AS fk\n , %L AS ftbl\nFROM %1$s WHERE %4$s IS NULL'\n , tbl\n , COALESCE(pk 'NONE')\n , COALESCE(pk 'NULL')\n , fk\n , ftbl), '\nUNION ALL\n') || ';'\nFROM x;\n
\nProduces a query like this:
\nSELECT 'some_tbl' AS tbl\n , 'some_tbl_id' AS pk\n , some_tbl_id::text AS pk_val\n , 'fk_tbl_id' AS fk\n , 'fk_tbl' AS ftbl\nFROM some_tbl WHERE fk_tbl_id IS NULL\nUNION ALL\nSELECT 'other_tbl' AS tbl\n , 'other_tbl_id' AS pk\n , other_tbl_id::text AS pk_val\n , 'some_name_id' AS fk\n , 'fk_tbl' AS ftbl\nFROM other_tbl WHERE some_name_id IS NULL;\n
\nProduces output like this:
\n tbl | pk | pk_val | fk | ftbl\n-----------+--------------+--------+--------------+--------\n some_tbl | some_tbl_id | 49 | fk_tbl_id | fk_tbl\n some_tbl | some_tbl_id | 58 | fk_tbl_id | fk_tbl\n other_tbl | other_tbl_id | 66 | some_name_id | fk_tbl\n other_tbl | other_tbl_id | 67 | some_name_id | fk_tbl\n
\n\nDoes not cover multi-column foreign or primary keys reliably. You have to make the query more complex for this.
\nI cast all primary key values to text to cover all types.
\nAdapt or remove these lines to find foreign key pointing to an other or any column / table:
\nAND c.confrelid = 'fk_tbl'::regclass\nAND f.attname = 'fk_tbl_id' -- and only this column\n
\nTested with PostgreSQL 9.1.4. I use the pg_catalog tables. Realistically nothing of what I use here is going to change, but that is not guaranteed across major releases. Rewrite it with tables from information_schema if you need it to work reliably across updates. That is slower, but sure.
\nI did not sanitize table names in the generated DML script, because quote_ident() would fail with schema-qualified names. It is your responsibility to avoid harmful table names like "users; DELETE * FROM users;". With some more effort, you can retrieve schema-name and table name separately and use quote_ident().
\n
\n
\nNULL values in referenced columns
\nMy first solution does something subtly different from what you ask, because what you describe (as I understand it) is non-existent. The value NULL is "unknown" and cannot be referenced. If you actually want to find rows with a NULL value in a column that has FK constraints pointing to it (not to the particular row with the NULL value, of course), then the query can be much simplified:
\nWITH x AS (\n SELECT c.confrelid::regclass AS ftbl\n ,quote_ident(f.attname) AS fk\n ,quote_ident(pf.attname) AS pk\n ,string_agg(c.conrelid::regclass::text, ', ') AS referencing_tbls\n FROM pg_constraint c\n JOIN pg_attribute f ON (f.attrelid, f.attnum) = (c.confrelid, c.confkey[1])\n LEFT JOIN pg_constraint p ON p.conrelid = c.confrelid AND p.contype = 'p'\n LEFT JOIN pg_attribute pf ON (pf.attrelid, pf.attnum)\n = (p.conrelid, p.conkey[1])\n WHERE c.contype = 'f'\n -- AND c.confrelid = 'fk_tbl'::regclass -- only referring this tbl\n GROUP BY 1, 2, 3\n)\nSELECT string_agg(format(\n'SELECT %L AS ftbl\n , %L AS pk\n , %s::text AS pk_val\n , %L AS fk\n , %L AS referencing_tbls\nFROM %1$s WHERE %4$s IS NULL'\n , ftbl\n , COALESCE(pk, 'NONE')\n , COALESCE(pk, 'NULL')\n , fk\n , referencing_tbls), '\nUNION ALL\n') || ';'\nFROM x;\n
\nFinds all such rows in the entire database (commented out the restriction to one table). Tested with Postgres 9.1.4 and works for me.
\nI group multiple tables referencing the same foreign column into one query and add a list of referencing tables to give an overview.
\n
soup wrap:
NULL values in referencing columns
This query produces the DML statement to find all rows in all tables where a column has a foreign-key constraint referencing another table but holds a NULL value in that column:
WITH x AS (
SELECT c.conrelid::regclass AS tbl
, c.confrelid::regclass AS ftbl
, quote_ident(k.attname) AS fk
, quote_ident(pf.attname) AS pk
FROM pg_constraint c
JOIN pg_attribute k ON (k.attrelid, k.attnum) = (c.conrelid, c.conkey[1])
JOIN pg_attribute f ON (f.attrelid, f.attnum) = (c.confrelid, c.confkey[1])
LEFT JOIN pg_constraint p ON p.conrelid = c.conrelid AND p.contype = 'p'
LEFT JOIN pg_attribute pf ON (pf.attrelid, pf.attnum)
= (p.conrelid, p.conkey[1])
WHERE c.contype = 'f'
AND c.confrelid = 'fk_tbl'::regclass -- references to this tbl
AND f.attname = 'fk_tbl_id' -- and only to this column
)
SELECT string_agg(format(
'SELECT %L AS tbl
, %L AS pk
, %s::text AS pk_val
, %L AS fk
, %L AS ftbl
FROM %1$s WHERE %4$s IS NULL'
, tbl
, COALESCE(pk, 'NONE')
, COALESCE(pk, 'NULL')
, fk
, ftbl), '
UNION ALL
') || ';'
FROM x;
Produces a query like this:
SELECT 'some_tbl' AS tbl
, 'some_tbl_id' AS pk
, some_tbl_id::text AS pk_val
, 'fk_tbl_id' AS fk
, 'fk_tbl' AS ftbl
FROM some_tbl WHERE fk_tbl_id IS NULL
UNION ALL
SELECT 'other_tbl' AS tbl
, 'other_tbl_id' AS pk
, other_tbl_id::text AS pk_val
, 'some_name_id' AS fk
, 'fk_tbl' AS ftbl
FROM other_tbl WHERE some_name_id IS NULL;
Produces output like this:
tbl | pk | pk_val | fk | ftbl
-----------+--------------+--------+--------------+--------
some_tbl | some_tbl_id | 49 | fk_tbl_id | fk_tbl
some_tbl | some_tbl_id | 58 | fk_tbl_id | fk_tbl
other_tbl | other_tbl_id | 66 | some_name_id | fk_tbl
other_tbl | other_tbl_id | 67 | some_name_id | fk_tbl
Does not cover multi-column foreign or primary keys reliably. You have to make the query more complex for this.
I cast all primary key values to text to cover all types.
Adapt or remove these lines to find foreign keys pointing to another or any column / table:
AND c.confrelid = 'fk_tbl'::regclass
AND f.attname = 'fk_tbl_id' -- and only this column
Tested with PostgreSQL 9.1.4. I use the pg_catalog tables. Realistically nothing of what I use here is going to change, but that is not guaranteed across major releases. Rewrite it with tables from information_schema if you need it to work reliably across updates. That is slower, but sure.
I did not sanitize table names in the generated DML script, because quote_ident() would fail with schema-qualified names. It is your responsibility to avoid harmful table names like "users; DELETE * FROM users;". With some more effort, you can retrieve schema-name and table name separately and use quote_ident().
NULL values in referenced columns
My first solution does something subtly different from what you ask, because what you describe (as I understand it) is non-existent. The value NULL is "unknown" and cannot be referenced. If you actually want to find rows with a NULL value in a column that has FK constraints pointing to it (not to the particular row with the NULL value, of course), then the query can be much simplified:
WITH x AS (
SELECT c.confrelid::regclass AS ftbl
,quote_ident(f.attname) AS fk
,quote_ident(pf.attname) AS pk
,string_agg(c.conrelid::regclass::text, ', ') AS referencing_tbls
FROM pg_constraint c
JOIN pg_attribute f ON (f.attrelid, f.attnum) = (c.confrelid, c.confkey[1])
LEFT JOIN pg_constraint p ON p.conrelid = c.confrelid AND p.contype = 'p'
LEFT JOIN pg_attribute pf ON (pf.attrelid, pf.attnum)
= (p.conrelid, p.conkey[1])
WHERE c.contype = 'f'
-- AND c.confrelid = 'fk_tbl'::regclass -- only referring this tbl
GROUP BY 1, 2, 3
)
SELECT string_agg(format(
'SELECT %L AS ftbl
, %L AS pk
, %s::text AS pk_val
, %L AS fk
, %L AS referencing_tbls
FROM %1$s WHERE %4$s IS NULL'
, ftbl
, COALESCE(pk, 'NONE')
, COALESCE(pk, 'NULL')
, fk
, referencing_tbls), '
UNION ALL
') || ';'
FROM x;
Finds all such rows in the entire database (commented out the restriction to one table). Tested with Postgres 9.1.4 and works for me.
I group multiple tables referencing the same foreign column into one query and add a list of referencing tables to give an overview.
qid & accept id:
(11793666, 11793730)
query:
Retrieving records from a table within two date variables
soup:
SELECT myColumn\n FROM myTable\n WHERE Date BETWEEN @StartDate AND @EndDate\n
\nEdited: Between clause is inclusive (both dates are included in the result) so if you maybe want to exclude one of the dates in the variable columns better use:
\nSELECT myColumn\n FROM myTable\n WHERE Date >= @StartDate\n AND Date <= @EndDate\n
\n
soup wrap:
SELECT myColumn
FROM myTable
WHERE Date BETWEEN @StartDate AND @EndDate
Edited: BETWEEN is inclusive (both endpoint dates are included in the result). The explicit form below is equivalent; to exclude either endpoint, change the corresponding >= or <= to > or <:
SELECT myColumn
FROM myTable
WHERE Date >= @StartDate
AND Date <= @EndDate
qid & accept id:
(11814210, 11814632)
query:
SQL Query Group By Mount And Year
soup:
Try this :
\nDeclare @Sample table \n(Buy datetime ,Qty int)\n\nInsert into @Sample values\n( '01-01-2012' ,1),\n('01-01-2012',1 ),\n('01-02-2012',1 ),\n('01-03-2012',1 ),\n('01-05-2012',1 ),\n('01-07-2012',1 ),\n('01-12-2012',1 )\n\n;with cte as \n(\n select top 12 row_number() over(order by t1.number) as N\n from master..spt_values t1 \n cross join master..spt_values t2\n )\nselect t.N as month,\nisnull(datepart(year,y.buy),'2012') as Year,\nsum(isnull(y.qty,0)) as Quantity\nfrom cte t\nleft join @Sample y\non month(convert(varchar(20),buy,103)) = t.N\ngroup by y.buy,t.N\n
\nCreate a Month table to store the value from 1 to 12 .Instead of master..spt_values you can also use sys.all_objects
\n select row_number() over (order by object_id) as months\n from sys.all_objects \n
\nor use a recursive cte to generate the month table
\n;with cte(N) as \n(\nSelect 1 \nunion all\nSelect 1+N from cte where N<12\n)\nSelect * from cte\n
\nand then use Left join to compare the value from the month table with your table and use isnull function to handle the null values.
\n
soup wrap:
Try this:
Declare @Sample table
(Buy datetime ,Qty int)
Insert into @Sample values
( '01-01-2012' ,1),
('01-01-2012',1 ),
('01-02-2012',1 ),
('01-03-2012',1 ),
('01-05-2012',1 ),
('01-07-2012',1 ),
('01-12-2012',1 )
;with cte as
(
select top 12 row_number() over(order by t1.number) as N
from master..spt_values t1
cross join master..spt_values t2
)
select t.N as month,
isnull(datepart(year,y.buy),'2012') as Year,
sum(isnull(y.qty,0)) as Quantity
from cte t
left join @Sample y
on month(convert(varchar(20),buy,103)) = t.N
group by y.buy,t.N
Create a month table to store the values from 1 to 12. Instead of master..spt_values you can also use sys.all_objects:
select row_number() over (order by object_id) as months
from sys.all_objects
or use a recursive cte to generate the month table
;with cte(N) as
(
Select 1
union all
Select 1+N from cte where N<12
)
Select * from cte
and then use Left join to compare the value from the month table with your table and use isnull function to handle the null values.
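The recursive-CTE month table works almost verbatim elsewhere; SQLite (here via Python's sqlite3) only insists on the RECURSIVE keyword that SQL Server lets you omit:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Same idea as the answer's cte(N): count 1 through 12.
months = [r[0] for r in con.execute("""
    WITH RECURSIVE cte(N) AS (
        SELECT 1
        UNION ALL
        SELECT 1 + N FROM cte WHERE N < 12
    )
    SELECT N FROM cte
""")]
print(months)  # 1 through 12
```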
qid & accept id:
(11822599, 11823335)
query:
How to compare oracle date and lotusscript date?
soup:
Create an Oracle date using the to_date function.
\nto_date(,'format')
\nFormat your date as a string for example: 06-05-2012 and this will return an Oracle date:
\nIn plsql that would look like:
\nmy_string := '06-08-2012';\nmy_date := to_date(my_string,'DD-MM-YYYY');\n
\nBut of course you can do this in SQL directly.
\nwhere LAST_MODIFIED > to_date(,)\n
\n
soup wrap:
Create an Oracle date using the to_date function.
to_date(,'format')
Format your date as a string, for example 06-05-2012, and this will return an Oracle date.
In plsql that would look like:
my_string := '06-08-2012';
my_date := to_date(my_string,'DD-MM-YYYY');
But of course you can do this in SQL directly.
where LAST_MODIFIED > to_date(,)
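On the client side the equivalent step is just formatting the date string to match. A rough analogue in Python, where datetime.strptime plays the role of to_date (the literal mirrors the PL/SQL example above):

```python
from datetime import datetime

# Analogue of Oracle's to_date('06-08-2012', 'DD-MM-YYYY')
my_date = datetime.strptime("06-08-2012", "%d-%m-%Y")
print(my_date.year, my_date.month, my_date.day)  # 2012 8 6
```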
qid & accept id:
(11833448, 11833863)
query:
How to query 2 different date ranges depending on the day it is run
soup:
If you have your query in a view, you might use this:
\nwhere\n Invoice_Date between\n (\n case\n when datepart(dd, getdate()) = 1 then dateadd(mm, -1, getdate())\n else dateadd(dd, -15, getdate())\n end\n )\n and\n (\n case\n when datepart(dd, getdate()) = 1 then dateadd(dd, -1, getdate())\n else dateadd(dd, -1, getdate())\n end\n )\n
\nUPDATE: Ignoring the time
\n(I know it looks ugly.)
\nwhere\n Invoice_Date between\n (\n case\n when datepart(dd, dateadd(dd, datediff(dd, 0, getdate()), 0)) = 1 then dateadd(mm, -1, dateadd(dd, datediff(dd, 0, getdate()), 0))\n else dateadd(dd, -15, dateadd(dd, datediff(dd, 0, getdate()), 0))\n end\n )\n and\n (\n case\n when datepart(dd, dateadd(dd, datediff(dd, 0, getdate()), 0)) = 1 then dateadd(dd, -1, dateadd(dd, datediff(dd, 0, getdate()), 0))\n else dateadd(dd, -1, dateadd(dd, datediff(dd, 0, getdate()), 0))\n end\n )\n
\n
soup wrap:
If you have your query in a view, you might use this:
where
Invoice_Date between
(
case
when datepart(dd, getdate()) = 1 then dateadd(mm, -1, getdate())
else dateadd(dd, -15, getdate())
end
)
and
(
case
when datepart(dd, getdate()) = 1 then dateadd(dd, -1, getdate())
else dateadd(dd, -1, getdate())
end
)
UPDATE: Ignoring the time
(I know it looks ugly.)
where
Invoice_Date between
(
case
when datepart(dd, dateadd(dd, datediff(dd, 0, getdate()), 0)) = 1 then dateadd(mm, -1, dateadd(dd, datediff(dd, 0, getdate()), 0))
else dateadd(dd, -15, dateadd(dd, datediff(dd, 0, getdate()), 0))
end
)
and
(
case
when datepart(dd, dateadd(dd, datediff(dd, 0, getdate()), 0)) = 1 then dateadd(dd, -1, dateadd(dd, datediff(dd, 0, getdate()), 0))
else dateadd(dd, -1, dateadd(dd, datediff(dd, 0, getdate()), 0))
end
)
qid & accept id:
(11844855, 11847599)
query:
linked tables in firebird, discard records that have a specific value in a one to many linked table
soup:
Below are three queries that will do the task:
\nSELECT\n c.*\nFROM\n client c \nWHERE\n NOT EXISTS(SELECT * FROM notes n WHERE n.client_id = c.client_id \n AND n.note = 'do not send')\n
\nor
\nSELECT\n c.*, n.client_id\nFROM\n client.c LEFT JOIN\n (SELECT client_id FROM notes WHERE note = 'do not send') n\n ON c.client_id = n.client_id\nWHERE\n n.client_id IS NULL\n
\nor
\nSELECT\n c.*\nFROM\n client c \nWHERE\n NOT c.client_id IN (SELECT client_id FROM notes n \n WHERE n.note = 'do not send')\n
\n
soup wrap:
Below are three queries that will do the task:
SELECT
c.*
FROM
client c
WHERE
NOT EXISTS(SELECT * FROM notes n WHERE n.client_id = c.client_id
AND n.note = 'do not send')
or
SELECT
c.*, n.client_id
FROM
client c LEFT JOIN
(SELECT client_id FROM notes WHERE note = 'do not send') n
ON c.client_id = n.client_id
WHERE
n.client_id IS NULL
or
SELECT
c.*
FROM
client c
WHERE
NOT c.client_id IN (SELECT client_id FROM notes n
WHERE n.note = 'do not send')
qid & accept id:
(11847584, 11847747)
query:
Transposing Rows in to colums in SQL Server 2005
soup:
You will need to perform a PIVOT. There are two ways to do this with PIVOT, either a Static Pivot where you code the columns to transform or a Dynamic Pivot which determines the columns at execution.
\nStatic Pivot:
\nSELECT *\nFROM\n(\n SELECT col1, col2\n FROM yourTable\n) x\nPIVOT\n(\n min(col2)\n for col1 in ([A], [B], [C])\n)p\n
\n\nDynamic Pivot:
\nDECLARE @cols AS NVARCHAR(MAX),\n @query AS NVARCHAR(MAX)\n\nselect @cols = STUFF((SELECT distinct ',' + QUOTENAME(col1) \n from t1\n FOR XML PATH(''), TYPE\n ).value('.', 'NVARCHAR(MAX)') \n ,1,1,'')\n\nset @query = 'SELECT ' + @cols + ' from \n (\n select col1, col2\n from t1\n ) x\n pivot \n (\n min(col2)\n for col1 in (' + @cols + ')\n ) p '\n\nexecute(@query)\n
\n\nIf you do not want to use the PIVOT function, then you can perform a similar type of query with CASE statements:
\nselect \n SUM(CASE WHEN col1 = 'A' THEN col2 END) as A,\n SUM(CASE WHEN col1 = 'B' THEN col2 END) as B,\n SUM(CASE WHEN col1 = 'C' THEN col2 END) as C\nFROM t1\n
\n\n
soup wrap:
You will need to perform a PIVOT. There are two ways to do this with PIVOT, either a Static Pivot where you code the columns to transform or a Dynamic Pivot which determines the columns at execution.
Static Pivot:
SELECT *
FROM
(
SELECT col1, col2
FROM yourTable
) x
PIVOT
(
min(col2)
for col1 in ([A], [B], [C])
)p
Dynamic Pivot:
DECLARE @cols AS NVARCHAR(MAX),
@query AS NVARCHAR(MAX)
select @cols = STUFF((SELECT distinct ',' + QUOTENAME(col1)
from t1
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
,1,1,'')
set @query = 'SELECT ' + @cols + ' from
(
select col1, col2
from t1
) x
pivot
(
min(col2)
for col1 in (' + @cols + ')
) p '
execute(@query)
If you do not want to use the PIVOT function, then you can perform a similar type of query with CASE statements:
select
SUM(CASE WHEN col1 = 'A' THEN col2 END) as A,
SUM(CASE WHEN col1 = 'B' THEN col2 END) as B,
SUM(CASE WHEN col1 = 'C' THEN col2 END) as C
FROM t1
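The CASE-based fallback is the most portable of the three. A quick check with Python's sqlite3 (which has no PIVOT operator at all), using made-up sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (col1 TEXT, col2 INTEGER)")
con.executemany("INSERT INTO t1 VALUES (?, ?)",
                [("A", 1), ("A", 2), ("B", 10), ("C", 100)])

# SUM ignores the NULLs produced by the ELSE-less CASE,
# so each output column aggregates only its own group.
row = con.execute("""
    SELECT SUM(CASE WHEN col1 = 'A' THEN col2 END) AS A,
           SUM(CASE WHEN col1 = 'B' THEN col2 END) AS B,
           SUM(CASE WHEN col1 = 'C' THEN col2 END) AS C
    FROM t1
""").fetchone()
print(row)  # (3, 10, 100)
```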
qid & accept id:
(11852951, 11853200)
query:
SQL - Determine count of records active at time
soup:
The following uses correlated subqueries to get the numbers you want. The idea is to count the number of cumulative starts and cumulative ends, up to each time:
\nwith alltimes as\n (select t.*\n from ((select part_start_time as thetime, 1 as IsStart, 0 as IsEnd\n from t\n ) union all\n (select part_end_time, 0 as isStart, 1 as IsEnd\n from t\n )\n ) t\n )\nselect t.*,\n (cumstarts - cumends) as numactive\nfrom (select alltimes.thetime,\n (select sum(isStart)\n from allStarts as where as.part_start_time <= alltimes.thetime\n ) as cumStarts,\n (select sum(isEnd)\n from allStarts as where as.part_end_time <= alltimes.thetime\n ) as cumEnds\n from alltimes\n ) t\n
\nThe output is based on each time present in the data.
\nAs a rule of thumb, you don't want to be doing lots of data work on the application side. When possible, that is best done in the database.
\nThis query will have duplicates when there are multiple starts and ends at the same time. In this case, you would need to determine how to treat this case. But, the idea is the same. The outer select would be:
\nselect t.thetime, max(cumstarts - cumends) as numactives\n
\nand you need a group by clause:
\ngroup by t.thetime\n
\nThe "max" gives the starts precedence (meaning with the same time stampt, the starts are treated as happening first, so you get the maximum actives at that time). "Min" would give the ends precedence. And, if you use average, remember to convert to floating point:
\nselect t.thetime, avg(cumstarts*1.0 - cumends) as avgnumactives\n
\n
soup wrap:
The following uses correlated subqueries to get the numbers you want. The idea is to count the number of cumulative starts and cumulative ends, up to each time:
with alltimes as
(select t.*
from ((select part_start_time as thetime, 1 as IsStart, 0 as IsEnd
from t
) union all
(select part_end_time, 0 as isStart, 1 as IsEnd
from t
)
) t
)
select t.*,
(cumstarts - cumends) as numactive
from (select alltimes.thetime,
(select sum(isStart)
from alltimes a where a.thetime <= alltimes.thetime
) as cumStarts,
(select sum(isEnd)
from alltimes a where a.thetime <= alltimes.thetime
) as cumEnds
from alltimes
) t
The output is based on each time present in the data.
As a rule of thumb, you don't want to be doing lots of data work on the application side. When possible, that is best done in the database.
This query will have duplicates when there are multiple starts and ends at the same time. In this case, you would need to determine how to treat this case. But, the idea is the same. The outer select would be:
select t.thetime, max(cumstarts - cumends) as numactives
and you need a group by clause:
group by t.thetime
The "max" gives the starts precedence (meaning with the same timestamp, the starts are treated as happening first, so you get the maximum actives at that time). "Min" would give the ends precedence. And, if you use average, remember to convert to floating point:
select t.thetime, avg(cumstarts*1.0 - cumends) as avgnumactives
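A runnable sketch of the cumulative starts-minus-ends idea with Python's sqlite3, using the alltimes CTE for both correlated subqueries (the sample intervals are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (part_start_time INTEGER, part_end_time INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?)", [(1, 5), (2, 6), (4, 7)])

# Union starts and ends into one event stream, then count
# cumulative starts minus cumulative ends at each event time.
rows = con.execute("""
    WITH alltimes AS (
        SELECT part_start_time AS thetime, 1 AS isStart, 0 AS isEnd FROM t
        UNION ALL
        SELECT part_end_time, 0, 1 FROM t
    )
    SELECT x.thetime,
           (SELECT SUM(a.isStart) FROM alltimes a WHERE a.thetime <= x.thetime)
         - (SELECT SUM(a.isEnd)   FROM alltimes a WHERE a.thetime <= x.thetime)
           AS numactive
    FROM alltimes x
    ORDER BY x.thetime
""").fetchall()
print(rows)  # [(1, 1), (2, 2), (4, 3), (5, 2), (6, 1), (7, 0)]
```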
qid & accept id:
(11900470, 12310205)
query:
Oracle: subtract millisecond from a datetime
soup:
For adding or subtracting an amount of time expressed as a literal you can use INTERVAL.
\nSELECT TO_TIMESTAMP('10/08/2012','DD/MM/YYYY')\n - INTERVAL '0.001' SECOND \nFROM dual;\n
\nAs well there are now standard ways to express date and time literals and avoid the use of various database specific conversion functions.
\nSELECT TIMESTAMP '2012-10-08 00:00:00' \n - INTERVAL '0.001' SECOND DATA\nFROM dual;\n
\nFor your original question the time part of a day is stored in fractional days. So one second is:
\n1 / (hours in day * minutes in hour * seconds in a minute)\n
\nDivide by 1000 to get milliseconds.
\n1 / (24 * 60 * 60 * 1000)\n
\n
soup wrap:
For adding or subtracting an amount of time expressed as a literal you can use INTERVAL.
SELECT TO_TIMESTAMP('10/08/2012','DD/MM/YYYY')
- INTERVAL '0.001' SECOND
FROM dual;
As well there are now standard ways to express date and time literals and avoid the use of various database specific conversion functions.
SELECT TIMESTAMP '2012-10-08 00:00:00'
- INTERVAL '0.001' SECOND DATA
FROM dual;
For your original question: in an Oracle DATE, the time of day is stored as a fraction of a day. So one second is:
1 / (hours in day * minutes in hour * seconds in a minute)
Divide by 1000 to get milliseconds.
1 / (24 * 60 * 60 * 1000)
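The fractional-day arithmetic above is easy to sanity-check outside the database. A minimal Python sketch (using `timedelta` rather than Oracle DATE arithmetic, purely as an illustration):

```python
from datetime import datetime, timedelta

# One day in Oracle DATE arithmetic is 1.0, so one millisecond is:
one_ms_as_day_fraction = 1 / (24 * 60 * 60 * 1000)

# Subtracting one millisecond from midnight rolls back to the previous day
d = datetime(2012, 8, 10)
print(d - timedelta(milliseconds=1))  # 2012-08-09 23:59:59.999000
```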
qid & accept id:
(11912188, 11925465)
query:
Smart SQL Merge - n rows, coalesce
soup:
If the performance is important enough to justify a couple of hours of coding and you are allowed to use SQLCLR, you can calculate all the values in a single table scan with a multi-parameter User-Defined Aggregate.
\nHere's an example of an aggregate that returns lowest-ranked non-NULL string:
\nusing System;\nusing System.Data;\nusing System.Data.SqlClient;\nusing System.Data.SqlTypes;\nusing System.IO;\nusing Microsoft.SqlServer.Server;\n\n[Serializable]\n[SqlUserDefinedAggregate(Format.UserDefined, MaxByteSize = -1, IsNullIfEmpty = true)]\npublic struct LowestRankString : IBinarySerialize\n{\n public int currentRank;\n public SqlString currentValue;\n\n public void Init()\n {\n currentRank = int.MaxValue;\n currentValue = SqlString.Null;\n }\n\n public void Accumulate(int Rank, SqlString Value)\n {\n if (!Value.IsNull)\n {\n if (Rank <= currentRank)\n {\n currentRank = Rank;\n currentValue = Value;\n }\n }\n }\n\n public void Merge(LowestRankString Group)\n {\n Accumulate(Group.currentRank, Group.currentValue);\n }\n\n public SqlString Terminate()\n {\n return currentValue;\n }\n\n public void Read(BinaryReader r)\n {\n currentRank = r.ReadInt32();\n bool hasValue = r.ReadBoolean();\n if (hasValue)\n {\n currentValue = new SqlString(r.ReadString());\n }\n else\n {\n currentValue = SqlString.Null;\n }\n }\n\n public void Write(BinaryWriter w)\n {\n w.Write(currentRank);\n\n bool hasValue = !currentValue.IsNull;\n w.Write(hasValue);\n if (hasValue)\n {\n w.Write(currentValue.Value);\n }\n }\n}\n
\nAssuming your table looks something like this:
\nCREATE TABLE TopNonNullRank (\n Id INT NOT NULL,\n UserId NVARCHAR (32) NOT NULL,\n Value1 NVARCHAR (128) NULL,\n Value2 NVARCHAR (128) NULL,\n Value3 NVARCHAR (128) NULL,\n Value4 NVARCHAR (128) NULL,\n PRIMARY KEY CLUSTERED (Id ASC)\n);
\nINSERT INTO TopNonNullRank (Id, UserId, Value1, Value2, Value3, Value4) VALUES \n (1, N'Ada', NULL, N'Top value 2 for A', N'Top value 3 for A', NULL),\n (2, N'Ada', N'Top value 1 for A', NULL, N'Other value 3', N'Top value 4 for A'),\n (3, N'Ada', N'Other value 1 for A', N'Other value 2 for A', N'Other value 3 for A', NULL),\n (4, N'Bob', N'Top value 1 for B', NULL, NULL, NULL),\n (5, N'Bob', NULL, NULL, NULL, N'Top value 4 for B'),\n (6, N'Bob', N'Other value 1 for B', N'Top value 2 for B', NULL, N'Other value 4');\n
\nThe following simple query returns the top non-NULL value for each column.
\nSELECT \n UserId,\n dbo.LowestRankString(Id, Value1) AS TopValue1,\n dbo.LowestRankString(Id, Value2) AS TopValue2,\n dbo.LowestRankString(Id, Value3) AS TopValue3,\n dbo.LowestRankString(Id, Value4) AS TopValue4\nFROM TopNonNullRank\nGROUP BY UserId\n
\nThe only thing left is merging the results back to the original table. The simplest way would be something like this:
\nWITH TopValuesPerUser AS\n(\n SELECT \n UserId,\n dbo.LowestRankString(Id, Value1) AS TopValue1,\n dbo.LowestRankString(Id, Value2) AS TopValue2,\n dbo.LowestRankString(Id, Value3) AS TopValue3,\n dbo.LowestRankString(Id, Value4) AS TopValue4\n FROM TopNonNullRank\n GROUP BY UserId\n)\nUPDATE TopNonNullRank\nSET\n Value1 = TopValue1,\n Value2 = TopValue2,\n Value3 = TopValue3,\n Value4 = TopValue4\nFROM TopNonNullRank AS OriginalTable\nLEFT JOIN TopValuesPerUser ON TopValuesPerUser.UserId = OriginalTable.UserId;\n
\nNote that this update still leaves you with duplicate rows, and you would need to get rid of them.
\nYou could also get more fancy and store the results of this query in a temporary table, and then use a MERGE statement to apply them to the original table.
\nAnother option would be to store the results in a new table, and then swap it with the original table using the sp_rename stored procedure.
\n
soup wrap:
If the performance is important enough to justify a couple of hours of coding and you are allowed to use SQLCLR, you can calculate all the values in a single table scan with a multi-parameter User-Defined Aggregate.
Here's an example of an aggregate that returns lowest-ranked non-NULL string:
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using System.IO;
using Microsoft.SqlServer.Server;
[Serializable]
[SqlUserDefinedAggregate(Format.UserDefined, MaxByteSize = -1, IsNullIfEmpty = true)]
public struct LowestRankString : IBinarySerialize
{
public int currentRank;
public SqlString currentValue;
public void Init()
{
currentRank = int.MaxValue;
currentValue = SqlString.Null;
}
public void Accumulate(int Rank, SqlString Value)
{
if (!Value.IsNull)
{
if (Rank <= currentRank)
{
currentRank = Rank;
currentValue = Value;
}
}
}
public void Merge(LowestRankString Group)
{
Accumulate(Group.currentRank, Group.currentValue);
}
public SqlString Terminate()
{
return currentValue;
}
public void Read(BinaryReader r)
{
currentRank = r.ReadInt32();
bool hasValue = r.ReadBoolean();
if (hasValue)
{
currentValue = new SqlString(r.ReadString());
}
else
{
currentValue = SqlString.Null;
}
}
public void Write(BinaryWriter w)
{
w.Write(currentRank);
bool hasValue = !currentValue.IsNull;
w.Write(hasValue);
if (hasValue)
{
w.Write(currentValue.Value);
}
}
}
Assuming your table looks something like this:
CREATE TABLE TopNonNullRank (
Id INT NOT NULL,
UserId NVARCHAR (32) NOT NULL,
Value1 NVARCHAR (128) NULL,
Value2 NVARCHAR (128) NULL,
Value3 NVARCHAR (128) NULL,
Value4 NVARCHAR (128) NULL,
PRIMARY KEY CLUSTERED (Id ASC)
);
INSERT INTO TopNonNullRank (Id, UserId, Value1, Value2, Value3, Value4) VALUES
(1, N'Ada', NULL, N'Top value 2 for A', N'Top value 3 for A', NULL),
(2, N'Ada', N'Top value 1 for A', NULL, N'Other value 3', N'Top value 4 for A'),
(3, N'Ada', N'Other value 1 for A', N'Other value 2 for A', N'Other value 3 for A', NULL),
(4, N'Bob', N'Top value 1 for B', NULL, NULL, NULL),
(5, N'Bob', NULL, NULL, NULL, N'Top value 4 for B'),
(6, N'Bob', N'Other value 1 for B', N'Top value 2 for B', NULL, N'Other value 4');
The following simple query returns the top non-NULL value for each column.
SELECT
UserId,
dbo.LowestRankString(Id, Value1) AS TopValue1,
dbo.LowestRankString(Id, Value2) AS TopValue2,
dbo.LowestRankString(Id, Value3) AS TopValue3,
dbo.LowestRankString(Id, Value4) AS TopValue4
FROM TopNonNullRank
GROUP BY UserId
The only thing left is merging the results back to the original table. The simplest way would be something like this:
WITH TopValuesPerUser AS
(
SELECT
UserId,
dbo.LowestRankString(Id, Value1) AS TopValue1,
dbo.LowestRankString(Id, Value2) AS TopValue2,
dbo.LowestRankString(Id, Value3) AS TopValue3,
dbo.LowestRankString(Id, Value4) AS TopValue4
FROM TopNonNullRank
GROUP BY UserId
)
UPDATE TopNonNullRank
SET
Value1 = TopValue1,
Value2 = TopValue2,
Value3 = TopValue3,
Value4 = TopValue4
FROM TopNonNullRank AS OriginalTable
LEFT JOIN TopValuesPerUser ON TopValuesPerUser.UserId = OriginalTable.UserId;
Note that this update still leaves you with duplicate rows, and you would need to get rid of them.
You could also get more fancy and store the results of this query in a temporary table, and then use a MERGE statement to apply them to the original table.
Another option would be to store the results in a new table, and then swap it with the original table using the sp_rename stored procedure.
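The Init/Accumulate logic above boils down to "keep the non-NULL value with the lowest rank seen so far". A minimal Python sketch of that semantics (an illustration of the idea, not the SQLCLR code itself):

```python
def lowest_rank_string(rows):
    """rows: iterable of (rank, value) pairs; returns the non-None value
    with the lowest rank, mirroring Init/Accumulate in the aggregate above."""
    best_rank, best_value = float("inf"), None
    for rank, value in rows:
        # None plays the role of SQL NULL; <= matches the aggregate's Rank <= currentRank
        if value is not None and rank <= best_rank:
            best_rank, best_value = rank, value
    return best_value

print(lowest_rank_string([(2, "b"), (1, None), (3, "c")]))  # b
```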
qid & accept id:
(11960289, 11960469)
query:
Access SQL update based on count results and conditional update
soup:
You can wrap a query in another query:
\nSELECT TechID, Rank FROM Rank,\n(SELECT x.TechID, Count(*) AS cnt, tblEmployeeData.LName, \n tblEmployeeData.Pernr, tblEmployeeData.Occurrences, tblEmployeeData.Standing\nFROM tblEmployeeData\nINNER JOIN tblOccurrence AS x ON tblEmployeeData.TechID = x.TechID\nWHERE (((x.OccurrenceDate) Between DateAdd("m",-6,Date()) And Date())\n AND ((Exists \n (SELECT * FROM tblOccurrence AS y WHERE y.TechID = x.TechID AND DATEADD \n ("d", -1, x.[OccurrenceDate]) = y.[OccurrenceDate]))=False))\nGROUP BY x.TechID, tblEmployeeData.LName, tblEmployeeData.Pernr) a\nWHERE a.Cnt BETWEEN Rank.Low And rank.High\n
\nThe idea is that you use the query with a Rank table, like so:
\nLow High Rank\n0 3 Good\n4 5 Verbal Warning\n6 7 Written Warning\n8 8 Final Written Warning\n9 99 Termination\n
\nEdit re comments
\nThis runs for me in a rough mock-up
\nSELECT a.TechID, tblRank.Rank FROM tblRank, (SELECT x.TechID, Count(*) AS cnt, tblEmployeeData.LName, \n tblEmployeeData.Pernr, tblEmployeeData.Occurrences, tblEmployeeData.Standing\nFROM tblEmployeeData\nINNER JOIN tblOccurrence AS x ON tblEmployeeData.TechID = x.TechID\nWHERE (((x.OccurrenceDate) Between DateAdd("m",-6,Date()) And Date()) AND ((Exists \n (SELECT * FROM tblOccurrence AS y WHERE y.TechID = x.TechID AND DATEADD \n ("d", -1, x.[OccurrenceDate]) = y.[OccurrenceDate]))=False))\nGROUP BY x.TechID, tblEmployeeData.LName, tblEmployeeData.Pernr, tblEmployeeData.Occurrences, tblEmployeeData.Standing) a\nWHERE a.Cnt BETWEEN tblRank.Low And tblrank.High\n
\n
soup wrap:
You can wrap a query in another query:
SELECT TechID, Rank FROM Rank,
(SELECT x.TechID, Count(*) AS cnt, tblEmployeeData.LName,
tblEmployeeData.Pernr, tblEmployeeData.Occurrences, tblEmployeeData.Standing
FROM tblEmployeeData
INNER JOIN tblOccurrence AS x ON tblEmployeeData.TechID = x.TechID
WHERE (((x.OccurrenceDate) Between DateAdd("m",-6,Date()) And Date())
AND ((Exists
(SELECT * FROM tblOccurrence AS y WHERE y.TechID = x.TechID AND DATEADD
("d", -1, x.[OccurrenceDate]) = y.[OccurrenceDate]))=False))
GROUP BY x.TechID, tblEmployeeData.LName, tblEmployeeData.Pernr) a
WHERE a.Cnt BETWEEN Rank.Low And rank.High
The idea is that you use the query with a Rank table, like so:
Low High Rank
0 3 Good
4 5 Verbal Warning
6 7 Written Warning
8 8 Final Written Warning
9 99 Termination
Edit re comments
This runs for me in a rough mock-up
SELECT a.TechID, tblRank.Rank FROM tblRank, (SELECT x.TechID, Count(*) AS cnt, tblEmployeeData.LName,
tblEmployeeData.Pernr, tblEmployeeData.Occurrences, tblEmployeeData.Standing
FROM tblEmployeeData
INNER JOIN tblOccurrence AS x ON tblEmployeeData.TechID = x.TechID
WHERE (((x.OccurrenceDate) Between DateAdd("m",-6,Date()) And Date()) AND ((Exists
(SELECT * FROM tblOccurrence AS y WHERE y.TechID = x.TechID AND DATEADD
("d", -1, x.[OccurrenceDate]) = y.[OccurrenceDate]))=False))
GROUP BY x.TechID, tblEmployeeData.LName, tblEmployeeData.Pernr, tblEmployeeData.Occurrences, tblEmployeeData.Standing) a
WHERE a.Cnt BETWEEN tblRank.Low And tblrank.High
qid & accept id:
(11969118, 11969295)
query:
SQLite: How to get certain field from multiple tables?
soup:
When you UNION results together, the column takes the name given to it in the first query (in this case, A_name).
\nInstead of using UNION ALL, try joining your tables together:
\nSELECT A.A_name, B.B_name, C.C_name\nFROM TableA A\n INNER JOIN TableB B ON A.companyId = B.companyId\n INNER JOIN TableC C ON A.companyId = C.companyId\nWHERE A.companyId = 1\n
\nThis will give you the results on a single row. If you really want the results as separate rows, you could perhaps select the table name along with the *_name field:
\nSELECT 'TableA' AS TableName, A_name FROM TableA WHERE companyId = 1 UNION ALL\nSELECT 'TableB', B_name FROM TableB WHERE companyId = 1 UNION ALL\nSELECT 'TableC', C_name FROM TableC WHERE companyId = 1\n
\n
soup wrap:
When you UNION results together, the column takes the name given to it in the first query (in this case, A_name).
Instead of using UNION ALL, try joining your tables together:
SELECT A.A_name, B.B_name, C.C_name
FROM TableA A
INNER JOIN TableB B ON A.companyId = B.companyId
INNER JOIN TableC C ON A.companyId = C.companyId
WHERE A.companyId = 1
This will give you the results on a single row. If you really want the results as separate rows, you could perhaps select the table name along with the *_name field:
SELECT 'TableA' AS TableName, A_name FROM TableA WHERE companyId = 1 UNION ALL
SELECT 'TableB', B_name FROM TableB WHERE companyId = 1 UNION ALL
SELECT 'TableC', C_name FROM TableC WHERE companyId = 1
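The join version can be checked directly against SQLite from Python's stdlib `sqlite3` module; the sample rows here are invented for the demonstration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE TableA (companyId INTEGER, A_name TEXT);
    CREATE TABLE TableB (companyId INTEGER, B_name TEXT);
    CREATE TABLE TableC (companyId INTEGER, C_name TEXT);
    INSERT INTO TableA VALUES (1, 'alpha');
    INSERT INTO TableB VALUES (1, 'beta');
    INSERT INTO TableC VALUES (1, 'gamma');
""")
row = con.execute("""
    SELECT A.A_name, B.B_name, C.C_name
    FROM TableA A
    INNER JOIN TableB B ON A.companyId = B.companyId
    INNER JOIN TableC C ON A.companyId = C.companyId
    WHERE A.companyId = 1
""").fetchone()
print(row)  # ('alpha', 'beta', 'gamma')
```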
qid & accept id:
(12013073, 12013200)
query:
Extract time from datetime efficiently (as decimal or datetime)
soup:
To get a datetime:
\nSELECT GetDate() - DateDiff(day, 0, GetDate());\n-- returns the time with zero as the datetime part (1900-01-01).\n
\nAnd to get a number representing the time:
\nSELECT DateDiff(millisecond, DateDiff(day, 0, GetDate()), GetDate());\n-- time since midnight in milliseconds, use as you wish\n
\nIf you really want a string, then:
\nSELECT Convert(varchar(8), GetDate(), 108); -- 'hh:mm:ss'\nSELECT Convert(varchar(12), GetDate(), 114); -- 'hh:mm:ss.nnn' where nnn is milliseconds\n
\n
soup wrap:
To get a datetime:
SELECT GetDate() - DateDiff(day, 0, GetDate());
-- returns the time with zero as the datetime part (1900-01-01).
And to get a number representing the time:
SELECT DateDiff(millisecond, DateDiff(day, 0, GetDate()), GetDate());
-- time since midnight in milliseconds, use as you wish
If you really want a string, then:
SELECT Convert(varchar(8), GetDate(), 108); -- 'hh:mm:ss'
SELECT Convert(varchar(12), GetDate(), 114); -- 'hh:mm:ss.nnn' where nnn is milliseconds
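The "number representing the time" variant is the same computation as subtracting today's midnight. A quick Python sketch, using a fixed timestamp as a stand-in for GetDate():

```python
from datetime import datetime

now = datetime(2024, 1, 2, 13, 45, 30, 250000)   # stand-in for GetDate()
midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
ms_since_midnight = int((now - midnight).total_seconds() * 1000)
print(ms_since_midnight)  # 49530250
```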
qid & accept id:
(12050795, 12050859)
query:
How to remove null values from a count function
soup:
The problem is that you return all rows in table a1_publisher. Try this instead.
\nselect j.publisher_id, count(j.publisher_id)\nFROM a1_journal j inner join a1_publisher p ON j.publisher_id=p.publisher_id \nGROUP BY j.publisher_id\nHAVING count(j.publisher_id) >=3\nORDER BY count(j.publisher_id) DESC\n
\nUPDATE:
\nThere are two ways to select the publisher's name.
\n\nIf the publisher's name is unique, you can add the column to the GROUP BY, like this:
\nselect j.publisher_id,p.publisher_name, count(j.publisher_id)\nFROM a1_journal j \n inner join a1_publisher p ON j.publisher_id=p.publisher_id \nGROUP BY j.publisher_id, p.publisher_name\nHAVING count(j.publisher_id) >=3\nORDER BY count(j.publisher_id) DESC\n
\nIf it's not unique, you need another join with a1_publisher, like this:
\nSELECT aj.publisher_id, aj.numberOfJournals, ap.publisher_name\nFROM a1_publisher ap \nINNER JOIN (\n    SELECT j.publisher_id, count(j.publisher_id) numberOfJournals\n    FROM a1_journal j \n    inner join a1_publisher p ON j.publisher_id=p.publisher_id \n    GROUP BY j.publisher_id\n    HAVING count(j.publisher_id) >=3 ) aj \nON ap.publisher_id = aj.publisher_id\nORDER BY aj.numberOfJournals DESC\n
\n
\n
soup wrap:
The problem is that you return all rows in table a1_publisher. Try this instead.
select j.publisher_id, count(j.publisher_id)
FROM a1_journal j inner join a1_publisher p ON j.publisher_id=p.publisher_id
GROUP BY j.publisher_id
HAVING count(j.publisher_id) >=3
ORDER BY count(j.publisher_id) DESC
UPDATE:
There are two ways to select the publisher's name.
If the publisher's name is unique, you can add the column to the GROUP BY, like this:
select j.publisher_id,p.publisher_name, count(j.publisher_id)
FROM a1_journal j
inner join a1_publisher p ON j.publisher_id=p.publisher_id
GROUP BY j.publisher_id, p.publisher_name
HAVING count(j.publisher_id) >=3
ORDER BY count(j.publisher_id) DESC
If it's not unique, you need another join with a1_publisher, like this:
SELECT aj.publisher_id, aj.numberOfJournals, ap.publisher_name
FROM a1_publisher ap
INNER JOIN (
SELECT j.publisher_id, count(j.publisher_id) numberOfJournals
FROM a1_journal j
inner join a1_publisher p ON j.publisher_id=p.publisher_id
GROUP BY j.publisher_id
HAVING count(j.publisher_id) >=3 ) aj
ON ap.publisher_id = aj.publisher_id
ORDER BY aj.numberOfJournals DESC
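The join/GROUP BY/HAVING shape translates directly to other engines. A self-contained SQLite sketch with invented sample data (publisher 1 has three journals, publisher 2 only one):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE a1_publisher (publisher_id INTEGER, publisher_name TEXT);
    CREATE TABLE a1_journal (journal_id INTEGER, publisher_id INTEGER);
    INSERT INTO a1_publisher VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO a1_journal VALUES (1, 1), (2, 1), (3, 1), (4, 2);
""")
rows = con.execute("""
    SELECT j.publisher_id, p.publisher_name, COUNT(*) AS numberOfJournals
    FROM a1_journal j
    INNER JOIN a1_publisher p ON j.publisher_id = p.publisher_id
    GROUP BY j.publisher_id, p.publisher_name
    HAVING COUNT(*) >= 3
    ORDER BY numberOfJournals DESC
""").fetchall()
print(rows)  # [(1, 'Acme', 3)]
```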
qid & accept id:
(12063841, 12063860)
query:
Display value from column B if column A is NULL
soup:
Use ISNULL() or COALESCE(), or CASE
\nSELECT ISNULL(ColumnA, ColumnB) AS [YourColumn]\nFROM FOO\n
\nOR
\nSELECT COALESCE(ColumnA, ColumnB) AS [YourColumn]\nFROM FOO\n
\nOR
\nSELECT CASE WHEN ColumnA IS NULL THEN\n ColumnB\n ELSE\n ColumnA\n END AS [YourColumn]\nFROM FOO\n
\n
soup wrap:
Use ISNULL() or COALESCE(), or CASE
SELECT ISNULL(ColumnA, ColumnB) AS [YourColumn]
FROM FOO
OR
SELECT COALESCE(ColumnA, ColumnB) AS [YourColumn]
FROM FOO
OR
SELECT CASE WHEN ColumnA IS NULL THEN
ColumnB
ELSE
ColumnA
END AS [YourColumn]
FROM FOO
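COALESCE is the portable spelling (ISNULL is SQL Server specific), so the same idea can be checked in SQLite; the rows here are made up for the demo:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE FOO (ColumnA TEXT, ColumnB TEXT)")
con.executemany("INSERT INTO FOO VALUES (?, ?)", [("a", "b"), (None, "fallback")])

# COALESCE returns the first non-NULL argument, row by row
rows = [r[0] for r in
        con.execute("SELECT COALESCE(ColumnA, ColumnB) FROM FOO ORDER BY rowid")]
print(rows)  # ['a', 'fallback']
```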
qid & accept id:
(12085307, 12087577)
query:
sec_to_time() function in PostgreSQL?
soup:
Use to_char:
\nregress=# SELECT to_char( (9999999 ||' seconds')::interval, 'HH24:MI:SS' );\n  to_char   \n------------\n 2777:46:39\n(1 row)\n
\nHere's a function that produces a text formatted value:
\nCREATE OR REPLACE FUNCTION sec_to_time(bigint) RETURNS text AS $$\nSELECT to_char( ($1|| ' seconds')::interval, 'HH24:MI:SS');\n$$ LANGUAGE 'SQL' IMMUTABLE;\n
\neg:
\nregress=# SELECT sec_to_time(9999999);\n sec_to_time \n-------------\n 2777:00:39\n(1 row)\n
\nIf you'd prefer an INTERVAL result, use:
\nCREATE OR REPLACE FUNCTION sec_to_time(bigint) RETURNS interval AS $$\nSELECT justify_interval( ($1|| ' seconds')::interval);\n$$ LANGUAGE 'SQL' IMMUTABLE;\n
\n... which will produce results like:
\nSELECT sec_to_time(9999999);\n sec_to_time \n-------------------------\n 3 mons 25 days 17:46:39\n(1 row)\n
\nDon't cast an INTERVAL to TIME though; it'll discard the days part. Use to_char(theinterval, 'HH24:MI:SS') to convert it to text without truncation instead.
\n
soup wrap:
Use to_char:
regress=# SELECT to_char( (9999999 ||' seconds')::interval, 'HH24:MI:SS' );
to_char
------------
2777:46:39
(1 row)
Here's a function that produces a text formatted value:
CREATE OR REPLACE FUNCTION sec_to_time(bigint) RETURNS text AS $$
SELECT to_char( ($1|| ' seconds')::interval, 'HH24:MI:SS');
$$ LANGUAGE 'SQL' IMMUTABLE;
eg:
regress=# SELECT sec_to_time(9999999);
sec_to_time
-------------
2777:00:39
(1 row)
If you'd prefer an INTERVAL result, use:
CREATE OR REPLACE FUNCTION sec_to_time(bigint) RETURNS interval AS $$
SELECT justify_interval( ($1|| ' seconds')::interval);
$$ LANGUAGE 'SQL' IMMUTABLE;
... which will produce results like:
SELECT sec_to_time(9999999);
sec_to_time
-------------------------
3 mons 25 days 17:46:39
(1 row)
Don't cast an INTERVAL to TIME though; it'll discard the days part. Use to_char(theinterval, 'HH24:MI:SS') to convert it to text without truncation instead.
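The same formatting (hours not wrapped at 24) is easy to reproduce client-side. A Python sketch of a sec_to_time equivalent:

```python
def sec_to_time(seconds: int) -> str:
    """Format a second count as HH24:MI:SS-style text without wrapping at 24 hours."""
    hours, rem = divmod(seconds, 3600)
    minutes, secs = divmod(rem, 60)
    return f"{hours:02d}:{minutes:02d}:{secs:02d}"

print(sec_to_time(9999999))  # 2777:46:39
```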
qid & accept id:
(12088243, 12110722)
query:
ActiveX calling URL page
soup:
Following up on the suggestion by @Ted, you can also fetch a URL using native Microsoft capabilities in an in-process fashion. You can do this via a component known as WinHTTP (the latest appears to be WinHTTP 5.1).
\nSee my script below which includes a function to simply obtain the status of a URL. When I run this script I get the following output:
\nhttp://www.google.com => 200 [OK]\nhttp://www.google.com/does_not_exist => 404 [Not Found]\nhttp://does_not_exist.google.com => -2147012889\n [The server name or address could not be resolved]\n
\nIf you want the actual content behind a URL, try oHttp.ResponseText. Here's the WinHTTP reference if you are interested in other capabilities as well.
\nOption Explicit\n\nDim aUrlList\naUrlList = Array( _\n "http://www.google.com", _\n "http://www.google.com/does_not_exist", _\n "http://does_not_exist.google.com" _\n)\n\nDim i\nFor i = 0 To UBound(aUrlList)\n WScript.Echo aUrlList(i) & " => " & GetUrlStatus(aUrlList(i))\nNext\n\nFunction GetUrlStatus(sUrl)\n Dim oHttp : Set oHttp = CreateObject("WinHttp.WinHttpRequest.5.1")\n\n On Error Resume Next\n\n With oHttp\n .Open "GET", SUrl, False\n .Send\n End With\n\n If Err Then\n GetUrlStatus = Err.Number & " [" & Err.Description & "]"\n Else\n GetUrlStatus = oHttp.Status & " [" & oHttp.StatusText & "]"\n End If\n\n Set oHttp = Nothing\nEnd Function\n
\n
soup wrap:
Following up on the suggestion by @Ted, you can also fetch a URL using native Microsoft capabilities in an in-process fashion. You can do this via a component known as WinHTTP (the latest appears to be WinHTTP 5.1).
See my script below which includes a function to simply obtain the status of a URL. When I run this script I get the following output:
http://www.google.com => 200 [OK]
http://www.google.com/does_not_exist => 404 [Not Found]
http://does_not_exist.google.com => -2147012889
[The server name or address could not be resolved]
If you want the actual content behind a URL, try oHttp.ResponseText. Here's the WinHTTP reference if you are interested in other capabilities as well.
Option Explicit
Dim aUrlList
aUrlList = Array( _
"http://www.google.com", _
"http://www.google.com/does_not_exist", _
"http://does_not_exist.google.com" _
)
Dim i
For i = 0 To UBound(aUrlList)
WScript.Echo aUrlList(i) & " => " & GetUrlStatus(aUrlList(i))
Next
Function GetUrlStatus(sUrl)
Dim oHttp : Set oHttp = CreateObject("WinHttp.WinHttpRequest.5.1")
On Error Resume Next
With oHttp
.Open "GET", SUrl, False
.Send
End With
If Err Then
GetUrlStatus = Err.Number & " [" & Err.Description & "]"
Else
GetUrlStatus = oHttp.Status & " [" & oHttp.StatusText & "]"
End If
Set oHttp = Nothing
End Function
qid & accept id:
(12133106, 12133149)
query:
MySQL - How do I compare two columns for repeated values?
soup:
This should work:
\nSelect f1.FRIEND_ID,f1.FRIEND_NAME from \nFRIENDS f1,FRIENDS f2 where f1.FRIEND_ID =f2.FRIEND_ID and \nf1.id=1 and f2.id=2\n
\nhere is the sample:\nhttp://sqlfiddle.com/#!2/c9f36/1/0
\nAlso, if you want to get all the people having common friends, try this:
\nSelect f1.FRIEND_ID,f1.FRIEND_NAME,f1.id 'first person',f2.id as 'second person' from \nFRIENDS f1,FRIENDS f2 where f1.FRIEND_ID =f2.FRIEND_ID and \nf1.id<>f2.id and f1.id
\nThis will return two people having the same friend per row: http://sqlfiddle.com/#!2/c9f36/2/0
\n
soup wrap:
This should work:
Select f1.FRIEND_ID,f1.FRIEND_NAME from
FRIENDS f1,FRIENDS f2 where f1.FRIEND_ID =f2.FRIEND_ID and
f1.id=1 and f2.id=2
here is the sample:
http://sqlfiddle.com/#!2/c9f36/1/0
Also, if you want to get all the people having common friends, try this:
Select f1.FRIEND_ID,f1.FRIEND_NAME,f1.id 'first person',f2.id as 'second person' from
FRIENDS f1,FRIENDS f2 where f1.FRIEND_ID =f2.FRIEND_ID and
f1.id<>f2.id and f1.id
This will return two people having the same friend per row: http://sqlfiddle.com/#!2/c9f36/2/0
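The self-join idea can be verified in SQLite with a minimal invented dataset (persons 1 and 2 share friend 10):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE FRIENDS (id INTEGER, FRIEND_ID INTEGER, FRIEND_NAME TEXT)")
con.executemany("INSERT INTO FRIENDS VALUES (?, ?, ?)",
                [(1, 10, "Sam"), (1, 11, "Kim"), (2, 10, "Sam"), (2, 12, "Lee")])

# self-join the table on FRIEND_ID to find friends common to persons 1 and 2
rows = con.execute("""
    SELECT f1.FRIEND_ID, f1.FRIEND_NAME
    FROM FRIENDS f1 JOIN FRIENDS f2 ON f1.FRIEND_ID = f2.FRIEND_ID
    WHERE f1.id = 1 AND f2.id = 2
""").fetchall()
print(rows)  # [(10, 'Sam')]
```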
qid & accept id:
(12151979, 12152003)
query:
Have an array in a SQL field. How to display it systematically?
soup:
You can use the FIND_IN_SET() function for that.
\nFor example, say you have records like this:
\nOrders Table\n------------------------------------\nOrderID | attachedCompanyIDs\n------------------------------------\n 1 1,2,3 -- comma separated values\n 2 2,4 \n
\nand
\nCompany Table\n--------------------------------------\nCompanyID | name\n--------------------------------------\n 1 Company 1\n 2 Another Company\n 3 StackOverflow\n 4 Nothing\n
\nUsing the function
\nSELECT name \nFROM orders, company\nWHERE orderID = 1 AND FIND_IN_SET(companyID, attachedCompanyIDs)\n
\nwill result in:
\nname\n---------------\nCompany 1\nAnother Company\nStackOverflow\n
\n
soup wrap:
You can use the FIND_IN_SET() function for that.
For example, say you have records like this:
Orders Table
------------------------------------
OrderID | attachedCompanyIDs
------------------------------------
1 1,2,3 -- comma separated values
2 2,4
and
Company Table
--------------------------------------
CompanyID | name
--------------------------------------
1 Company 1
2 Another Company
3 StackOverflow
4 Nothing
Using the function
SELECT name
FROM orders, company
WHERE orderID = 1 AND FIND_IN_SET(companyID, attachedCompanyIDs)
will result in:
name
---------------
Company 1
Another Company
StackOverflow
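FIND_IN_SET's semantics (the 1-based position of a value in a comma-separated string, 0 if absent) can be mimicked in Python. Note that storing a list in one column like this usually argues for a proper join table instead:

```python
def find_in_set(value, csv: str) -> int:
    """Mimic MySQL's FIND_IN_SET: 1-based position of value in a
    comma-separated string, or 0 when it is not present."""
    items = csv.split(",")
    return items.index(str(value)) + 1 if str(value) in items else 0

print(find_in_set(2, "1,2,3"))  # 2
print(find_in_set(4, "1,2,3"))  # 0
```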
qid & accept id:
(12160776, 12161066)
query:
SQL cumulative % Total
soup:
I think you're looking for something like this, though your example calculations may be off a little:
\nSELECT\n COLA,\n COLB,\n ROUND(\n -- Divide the running total...\n (SELECT CAST(SUM(COLB) AS FLOAT) FROM #MyTempTable WHERE COLA <= a.COLA) /\n -- ...by the full total\n (SELECT CAST(SUM(COLB) AS FLOAT) FROM #MyTempTable),\n 2\n ) AS COLC\nFROM #MyTempTable AS a\nORDER BY COLA\n
\nEDIT: I've added rounding.
\nThis gives us the following output:
\nCOLA COLB COLC\nName1 218 0.35\nName2 157 0.6\nName3 134 0.81\nName4 121 1\n
\nThe reason that your results are 0 (or 1) is because you are dividing ints by ints, thus giving you an int (see Datatype precedence).
\nUPDATE:
\nI should add that this uses a "triangular join" to get the running total (WHERE COLA <= a.COLA). Depending upon your SQL Server version, you may compare this to other options if performance becomes a concern.
\n
soup wrap:
I think you're looking for something like this, though your example calculations may be off a little:
SELECT
COLA,
COLB,
ROUND(
-- Divide the running total...
(SELECT CAST(SUM(COLB) AS FLOAT) FROM #MyTempTable WHERE COLA <= a.COLA) /
-- ...by the full total
(SELECT CAST(SUM(COLB) AS FLOAT) FROM #MyTempTable),
2
) AS COLC
FROM #MyTempTable AS a
ORDER BY COLA
EDIT: I've added rounding.
This gives us the following output:
COLA COLB COLC
Name1 218 0.35
Name2 157 0.6
Name3 134 0.81
Name4 121 1
The reason that your results are 0 (or 1) is because you are dividing ints by ints, thus giving you an int (see Datatype precedence).
UPDATE:
I should add that this uses a "triangular join" to get the running total (WHERE COLA <= a.COLA). Depending upon your SQL Server version, you may compare this to other options if performance becomes a concern.
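The running-total-over-grand-total arithmetic can be checked against the sample rows from the output above:

```python
rows = [("Name1", 218), ("Name2", 157), ("Name3", 134), ("Name4", 121)]
total = sum(v for _, v in rows)          # 630; note: float division, as the CAST ... AS FLOAT does
running, result = 0, []
for name, v in rows:
    running += v
    result.append((name, v, round(running / total, 2)))
print(result)
# [('Name1', 218, 0.35), ('Name2', 157, 0.6), ('Name3', 134, 0.81), ('Name4', 121, 1.0)]
```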
qid & accept id:
(12175474, 12175510)
query:
simple flow control with mysql
soup:
UPDATE A SET act=now() WHERE id=1 AND act_reset <> 0\n
\nIs this the query you are looking for?
\nUsing an IF statement in MySQL:
\nIF act_reset <> 0 THEN \n UPDATE A SET act=now() WHERE id=1 \nEND IF; \n
\n
soup wrap:
UPDATE A SET act=now() WHERE id=1 AND act_reset <> 0
Is this the query you are looking for?
Using an IF statement in MySQL:
IF act_reset <> 0 THEN
UPDATE A SET act=now() WHERE id=1
END IF;
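The single-statement form (folding the condition into the WHERE clause) behaves like the IF; a SQLite sketch, with `datetime('now')` standing in for MySQL's now():

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE A (id INTEGER, act TEXT, act_reset INTEGER)")
con.execute("INSERT INTO A VALUES (1, NULL, 0)")

# act_reset = 0, so the WHERE clause filters the row out and act stays NULL
con.execute("UPDATE A SET act = datetime('now') WHERE id = 1 AND act_reset <> 0")
assert con.execute("SELECT act FROM A WHERE id = 1").fetchone()[0] is None

# with act_reset = 1 the same statement fires and act gets set
con.execute("UPDATE A SET act_reset = 1 WHERE id = 1")
con.execute("UPDATE A SET act = datetime('now') WHERE id = 1 AND act_reset <> 0")
print(con.execute("SELECT act IS NOT NULL FROM A WHERE id = 1").fetchone()[0])  # 1
```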
qid & accept id:
(12221037, 12221580)
query:
How can I query row data as columns?
soup:
You can do an UNPIVOT and then a PIVOT of the data. This can be done either statically or dynamically:
\nStatic Version:
\nselect *\nfrom\n(\n select fk, col + cast(rownumber as varchar(1)) new_col,\n val\n from \n (\n select fk, rownumber, value, cast(type as varchar(10)) type,\n status\n from yourtable\n ) x\n unpivot\n (\n val\n for col in (value, type, status)\n ) u\n) x1\npivot\n(\n max(val)\n for new_col in\n ([value1], [type1], [status1], \n [value2], [type2], [status2],\n [value3], [type3])\n) p\n
\n\nDynamic Version, this will get the list of columns to unpivot and then to pivot at run-time:
\nDECLARE @colsUnpivot AS NVARCHAR(MAX),\n @query AS NVARCHAR(MAX),\n @colsPivot as NVARCHAR(MAX)\n\nselect @colsUnpivot = stuff((select ','+quotename(C.name)\n from sys.columns as C\n where C.object_id = object_id('yourtable') and\n C.name not in ('fk', 'rownumber')\n for xml path('')), 1, 1, '')\n\nselect @colsPivot = STUFF((SELECT ',' \n + quotename(c.name \n + cast(t.rownumber as varchar(10)))\n from yourtable t\n cross apply \n sys.columns as C\n where C.object_id = object_id('yourtable') and\n C.name not in ('fk', 'rownumber')\n group by c.name, t.rownumber\n order by t.rownumber\n FOR XML PATH(''), TYPE\n ).value('.', 'NVARCHAR(MAX)') \n ,1,1,'')\n\n\nset @query \n = 'select *\n from\n (\n select fk, col + cast(rownumber as varchar(10)) new_col,\n val\n from \n (\n select fk, rownumber, value, cast(type as varchar(10)) type,\n status\n from yourtable\n ) x\n unpivot\n (\n val\n for col in ('+ @colsunpivot +')\n ) u\n ) x1\n pivot\n (\n max(val)\n for new_col in\n ('+ @colspivot +')\n ) p'\n\nexec(@query)\n
\n\nBoth will generate the same results; however, the dynamic version is great if you do not know the number of columns ahead of time.
\nThe Dynamic version is working under the assumption that the rownumber is already a part of the dataset.
\n
soup wrap:
You can do an UNPIVOT and then a PIVOT of the data. This can be done either statically or dynamically:
Static Version:
select *
from
(
select fk, col + cast(rownumber as varchar(1)) new_col,
val
from
(
select fk, rownumber, value, cast(type as varchar(10)) type,
status
from yourtable
) x
unpivot
(
val
for col in (value, type, status)
) u
) x1
pivot
(
max(val)
for new_col in
([value1], [type1], [status1],
[value2], [type2], [status2],
[value3], [type3])
) p
Dynamic Version, this will get the list of columns to unpivot and then to pivot at run-time:
DECLARE @colsUnpivot AS NVARCHAR(MAX),
@query AS NVARCHAR(MAX),
@colsPivot as NVARCHAR(MAX)
select @colsUnpivot = stuff((select ','+quotename(C.name)
from sys.columns as C
where C.object_id = object_id('yourtable') and
C.name not in ('fk', 'rownumber')
for xml path('')), 1, 1, '')
select @colsPivot = STUFF((SELECT ','
+ quotename(c.name
+ cast(t.rownumber as varchar(10)))
from yourtable t
cross apply
sys.columns as C
where C.object_id = object_id('yourtable') and
C.name not in ('fk', 'rownumber')
group by c.name, t.rownumber
order by t.rownumber
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
,1,1,'')
set @query
= 'select *
from
(
select fk, col + cast(rownumber as varchar(10)) new_col,
val
from
(
select fk, rownumber, value, cast(type as varchar(10)) type,
status
from yourtable
) x
unpivot
(
val
for col in ('+ @colsunpivot +')
) u
) x1
pivot
(
max(val)
for new_col in
('+ @colspivot +')
) p'
exec(@query)
Both will generate the same results; however, the dynamic version is great if you do not know the number of columns ahead of time.
The Dynamic version is working under the assumption that the rownumber is already a part of the dataset.
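At its core, the UNPIVOT-then-PIVOT step is a long-to-wide reshape. A plain-Python sketch of the same idea, with invented rows shaped like the inner SELECT (fk, rownumber, value, type, status):

```python
long_rows = [
    (1, 1, "v11", "t11", "s11"),
    (1, 2, "v12", "t12", "s12"),
    (2, 1, "v21", "t21", "s21"),
]

wide = {}
for fk, n, value, typ, status in long_rows:
    row = wide.setdefault(fk, {})
    # suffix each column name with its rownumber, as col + cast(rownumber ...) does
    row[f"value{n}"], row[f"type{n}"], row[f"status{n}"] = value, typ, status

print(wide[1]["value2"], wide[2]["status1"])  # v12 s21
```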
qid & accept id:
(12248899, 12249175)
query:
How can i concatenate and make a group of text in sql server?
soup:
Here, try this one,
\nSELECT a.dept_id, \n NewTable.NameValues\nFROM (\n SELECT DISTINCT dept_ID\n FROM tableA\n ) a \n LEFT JOIN\n (\n SELECT dept_id,\n STUFF((\n SELECT ', ' + [Name] \n FROM tableA\n WHERE ( dept_id = Results.dept_id )\n FOR XML PATH('')), 1, 1, '') AS NameValues\n FROM tableA Results\n GROUP BY dept_id\n ) NewTable\n on a.dept_id = NewTable.dept_id\nGO\n
\nSQLFiddle Demo
\nHere's another version:
\nSELECT a.dept_id, \n SUBSTRING(d.nameList,1, LEN(d.nameList) - 1) ConcatenateNames\nFROM \n (\n SELECT DISTINCT dept_id\n FROM tableA\n ) a\n CROSS APPLY\n (\n SELECT name + ', ' \n FROM tableA AS B \n WHERE A.dept_id = B.dept_id \n FOR XML PATH('')\n ) D (nameList)\nGO\n
\nSQLFiddle Demo
\n
soup wrap:
Here, try this one,
SELECT a.dept_id,
NewTable.NameValues
FROM (
SELECT DISTINCT dept_ID
FROM tableA
) a
LEFT JOIN
(
SELECT dept_id,
STUFF((
SELECT ', ' + [Name]
FROM tableA
WHERE ( dept_id = Results.dept_id )
FOR XML PATH('')), 1, 1, '') AS NameValues
FROM tableA Results
GROUP BY dept_id
) NewTable
on a.dept_id = NewTable.dept_id
GO
SQLFiddle Demo
Here's another version:
SELECT a.dept_id,
SUBSTRING(d.nameList,1, LEN(d.nameList) - 1) ConcatenateNames
FROM
(
SELECT DISTINCT dept_id
FROM tableA
) a
CROSS APPLY
(
SELECT name + ', '
FROM tableA AS B
WHERE A.dept_id = B.dept_id
FOR XML PATH('')
) D (nameList)
GO
SQLFiddle Demo
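SQL Server needs the STUFF/FOR XML PATH trick above; other engines have a direct aggregate for this. For comparison, SQLite's group_concat does the same grouping-and-joining (sample data invented; the order of names within a group is not guaranteed):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tableA (dept_id INTEGER, name TEXT)")
con.executemany("INSERT INTO tableA VALUES (?, ?)",
                [(1, "Ann"), (1, "Bob"), (2, "Cid")])
rows = con.execute("""
    SELECT dept_id, group_concat(name, ', ')
    FROM tableA
    GROUP BY dept_id
    ORDER BY dept_id
""").fetchall()
print(rows)
```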
qid & accept id:
(12250195, 12250216)
query:
How can I update more than one record in MS SQL?
soup:
For the first one would be:
\nUPDATE Stackoverflow\nSet StateId = 1\nwhere GeneralId = 1000;\n
\nFor the second one:
\nUPDATE Stackoverflow\nSet StateId = 1\nwhere GeneralId = 1001;\n
\nFor both of them:
\nUPDATE Stackoverflow\nSet StateId = 1\nwhere GeneralId IN (1000,1001);\n
\n
soup wrap:
For the first one would be:
UPDATE Stackoverflow
Set StateId = 1
where GeneralId = 1000;
For the second one:
UPDATE Stackoverflow
Set StateId = 1
where GeneralId = 1001;
For both of them:
UPDATE Stackoverflow
Set StateId = 1
where GeneralId IN (1000,1001);
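The IN form updates both rows in one statement while leaving the rest alone; a SQLite sketch with invented rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Stackoverflow (GeneralId INTEGER, StateId INTEGER)")
con.executemany("INSERT INTO Stackoverflow VALUES (?, ?)",
                [(1000, 0), (1001, 0), (1002, 0)])

con.execute("UPDATE Stackoverflow SET StateId = 1 WHERE GeneralId IN (1000, 1001)")
rows = con.execute(
    "SELECT GeneralId, StateId FROM Stackoverflow ORDER BY GeneralId").fetchall()
print(rows)  # [(1000, 1), (1001, 1), (1002, 0)]
```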
qid & accept id:
(12251993, 12252082)
query:
Dumping sqlite3 database for use in Titanium
soup:
Why do you dump the database file when you can simply copy it, i.e. use it as it is?
\nAs explained here, sqlite databases are cross-platform:
\n\nA database in SQLite is a single disk file. Furthermore, the file\n format is cross-platform. A database that is created on one machine\n can be copied and used on a different machine with a different\n architecture. SQLite databases are portable across 32-bit and 64-bit\n machines and between big-endian and little-endian architectures.
\n
\nOn the other hand, you should be able to dump and compress your database like this:
\necho '.dump' | sqlite3 foo.db | gzip -c > foo.dump.gz\n
\nand restore it in a new SQLite database:
\ngunzip -c foo.dump.gz | sqlite3 foo.new.db\n
\n
soup wrap:
Why do you dump the database file when you can simply copy it, i.e. use it as it is?
As explained here, sqlite databases are cross-platform:
A database in SQLite is a single disk file. Furthermore, the file
format is cross-platform. A database that is created on one machine
can be copied and used on a different machine with a different
architecture. SQLite databases are portable across 32-bit and 64-bit
machines and between big-endian and little-endian architectures.
On the other hand, you should be able to dump and compress your database like this:
echo '.dump' | sqlite3 foo.db | gzip -c > foo.dump.gz
and restore it in a new SQLite database:
gunzip -c foo.dump.gz | sqlite3 foo.new.db
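The same dump-and-restore round trip can be sketched in Python using only the standard sqlite3 module; Connection.iterdump() emits the same portable SQL text as the shell's .dump command. The table and data here are invented, and the two in-memory databases stand in for foo.db and foo.new.db:

```python
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t (id INTEGER, name TEXT)")
src.execute("INSERT INTO t VALUES (1, 'alpha')")
src.commit()

# iterdump() yields the SQL statements that recreate the whole database
dump_sql = "\n".join(src.iterdump())

dst = sqlite3.connect(":memory:")  # stands in for foo.new.db
dst.executescript(dump_sql)        # replay the dump

restored = dst.execute("SELECT * FROM t").fetchall()
print(restored)  # [(1, 'alpha')]
```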
qid & accept id:
(12265411, 12265431)
query:
How can I tell if a VARCHAR variable contains a substring?
soup:
The standard SQL way is to use like:
\nwhere @stringVar like '%thisstring%'\n
\nThat is in a query statement. You can also do this in TSQL:
\nif @stringVar like '%thisstring%'\n
\n
soup wrap:
The standard SQL way is to use like:
where @stringVar like '%thisstring%'
That is in a query statement. You can also do this in TSQL:
if @stringVar like '%thisstring%'
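A hedged sketch of the "contains" test, run through SQLite from Python so the LIKE semantics can be checked; the helper name and sample strings are invented, and the bound parameter plays the role of @stringVar:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

def contains(haystack, needle):
    # LIKE with % wildcards on both sides implements "contains";
    # || concatenates the wildcards around the needle
    row = conn.execute("SELECT ? LIKE '%' || ? || '%'",
                       (haystack, needle)).fetchone()
    return bool(row[0])

print(contains("say thisstring here", "thisstring"))  # True
print(contains("something else", "thisstring"))       # False
```

Note that LIKE is case-insensitive for ASCII in SQLite and collation-dependent in SQL Server, so exact matching behaviour varies by engine.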
qid & accept id:
(12335438, 12338490)
query:
Server timezone offset value
soup:
For the time zone you can:
\nSHOW timezone;\n
\nor the equivalent:
\nSELECT current_setting('TIMEZONE');\n
\nbut this can be in any format accepted by the server, so it may return UTC, 08:00, Australia/Victoria, or similar.
\nFrustratingly, there appears to be no built-in function to report the time offset from UTC the client is using in hours and minutes, which seems kind of insane to me. You can get the offset by comparing the current time in UTC to the current time locally:
\nSELECT age(current_timestamp AT TIME ZONE 'UTC', current_timestamp)\n
\n... but IMO it's cleaner to extract the tz offset in seconds from the current_timestamp and convert to an interval:
\nSELECT to_char(extract(timezone from current_timestamp) * INTERVAL '1' second, 'FMHH24:MM');\n
\nThat'll match the desired result except that it doesn't produce a leading zero, so -05:00 is just -5:00. Annoyingly it seems to be impossible to get to_char to produce a leading zero for hours, leaving me with the following ugly manual formatting:
\nCREATE OR REPLACE FUNCTION oracle_style_tz() RETURNS text AS $$\nSELECT to_char(extract(timezone_hour FROM current_timestamp),'FM00')||':'||\n to_char(extract(timezone_minute FROM current_timestamp),'FM00');\n$$ LANGUAGE 'SQL' STABLE;\n
\nCredit to Glenn for timezone_hour and timezone_minute instead of the hack I used earlier with extract(timezone from current_timestamp) * INTERVAL '1' second and a CTE.
\nIf you don't need the leading zero you can instead use:
\nCREATE OR REPLACE FUNCTION oracle_style_tz() RETURNS text AS $$\nSELECT to_char(extract(timezone from current_timestamp) * INTERVAL '1' second, 'FMHH24:MM');\n$$ LANGUAGE 'SQL' STABLE;\n
\nSee also:
\n\n
soup wrap:
For the time zone you can:
SHOW timezone;
or the equivalent:
SELECT current_setting('TIMEZONE');
but this can be in any format accepted by the server, so it may return UTC, 08:00, Australia/Victoria, or similar.
Frustratingly, there appears to be no built-in function to report the time offset from UTC the client is using in hours and minutes, which seems kind of insane to me. You can get the offset by comparing the current time in UTC to the current time locally:
SELECT age(current_timestamp AT TIME ZONE 'UTC', current_timestamp)
... but IMO it's cleaner to extract the tz offset in seconds from the current_timestamp and convert to an interval:
SELECT to_char(extract(timezone from current_timestamp) * INTERVAL '1' second, 'FMHH24:MM');
That'll match the desired result except that it doesn't produce a leading zero, so -05:00 is just -5:00. Annoyingly it seems to be impossible to get to_char to produce a leading zero for hours, leaving me with the following ugly manual formatting:
CREATE OR REPLACE FUNCTION oracle_style_tz() RETURNS text AS $$
SELECT to_char(extract(timezone_hour FROM current_timestamp),'FM00')||':'||
to_char(extract(timezone_minute FROM current_timestamp),'FM00');
$$ LANGUAGE 'SQL' STABLE;
Credit to Glenn for timezone_hour and timezone_minute instead of the hack I used earlier with extract(timezone from current_timestamp) * INTERVAL '1' second and a CTE.
If you don't need the leading zero you can instead use:
CREATE OR REPLACE FUNCTION oracle_style_tz() RETURNS text AS $$
SELECT to_char(extract(timezone from current_timestamp) * INTERVAL '1' second, 'FMHH24:MM');
$$ LANGUAGE 'SQL' STABLE;
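The leading-zero formatting itself is easy to check outside the database. A small Python sketch that renders an arbitrary offset as ±HH:MM with zero-padded hours, mirroring what the SQL function above produces (the function name and sample offsets are invented):

```python
from datetime import timedelta

def fmt_offset(offset: timedelta) -> str:
    # render a UTC offset as +HH:MM / -HH:MM with a zero-padded hour
    total = int(offset.total_seconds())
    sign = "-" if total < 0 else "+"
    hours, rem = divmod(abs(total), 3600)
    return f"{sign}{hours:02d}:{rem // 60:02d}"

print(fmt_offset(timedelta(hours=-5)))             # -05:00
print(fmt_offset(timedelta(hours=5, minutes=30)))  # +05:30
```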
See also:
qid & accept id:
(12366390, 12366471)
query:
How to select product that have the maximum price of each category?
soup:
Try this one if you want to get the whole row,
\n(supports most RDBMS)
\nSELECT a.*\nFROM tbProduct a\n INNER JOIN\n (\n SELECT Category, MAX(Price) maxPrice\n FROM tbProduct\n GROUP BY Category\n ) b ON a.category = b.category AND\n a.price = b.maxPrice\n
\nIf you are using MSSQL 2008+
\nWITH allProducts AS\n(\nSELECT ProductId,ProductName,Category,Price,\n ROW_NUMBER() OVER (PARTITION BY CATEGORY ORDER BY Price DESC) ROW_NUM\nFROM tbProduct\n)\nSELECT ProductId,ProductName,Category,Price\nFROM allProducts\nWHERE ROW_NUM = 1\n
\nor
\nSELECT ProductId,ProductName,Category,Price\nFROM \n(\nSELECT ProductId,ProductName,Category,Price,\n ROW_NUMBER() OVER (PARTITION BY CATEGORY ORDER BY Price DESC) ROW_NUM\nFROM tbProduct\n) allProducts\nWHERE ROW_NUM = 1\n
\nSQLFiddle Demo
\n
soup wrap:
Try this one if you want to get the whole row,
(supports most RDBMS)
SELECT a.*
FROM tbProduct a
INNER JOIN
(
SELECT Category, MAX(Price) maxPrice
FROM tbProduct
GROUP BY Category
) b ON a.category = b.category AND
a.price = b.maxPrice
If you are using MSSQL 2008+
WITH allProducts AS
(
SELECT ProductId,ProductName,Category,Price,
ROW_NUMBER() OVER (PARTITION BY CATEGORY ORDER BY Price DESC) ROW_NUM
FROM tbProduct
)
SELECT ProductId,ProductName,Category,Price
FROM allProducts
WHERE ROW_NUM = 1
or
SELECT ProductId,ProductName,Category,Price
FROM
(
SELECT ProductId,ProductName,Category,Price,
ROW_NUMBER() OVER (PARTITION BY CATEGORY ORDER BY Price DESC) ROW_NUM
FROM tbProduct
) allProducts
WHERE ROW_NUM = 1
SQLFiddle Demo
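The portable join-on-max form can be verified quickly; a sketch in Python's sqlite3 with a few invented products:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbProduct "
             "(ProductId INTEGER, ProductName TEXT, Category TEXT, Price REAL)")
conn.executemany("INSERT INTO tbProduct VALUES (?,?,?,?)", [
    (1, "Pen",     "Office", 2.0),
    (2, "Stapler", "Office", 9.5),
    (3, "Apple",   "Food",   1.0),
    (4, "Steak",   "Food",  12.0),
])

# join each row back to its category's maximum price
rows = conn.execute("""
    SELECT a.ProductId, a.ProductName, a.Category, a.Price
    FROM tbProduct a
    JOIN (SELECT Category, MAX(Price) maxPrice
          FROM tbProduct GROUP BY Category) b
      ON a.Category = b.Category AND a.Price = b.maxPrice
    ORDER BY a.Category
""").fetchall()
print(rows)  # [(4, 'Steak', 'Food', 12.0), (2, 'Stapler', 'Office', 9.5)]
```

Unlike the ROW_NUMBER() variants, this form returns every row tied for the maximum price in a category, which may or may not be what you want.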
qid & accept id:
(12386646, 12386858)
query:
Execute a result in SQL Server using a stored procedure
soup:
You should use dynamic SQL to run the nvarchar(max) query string returned by the first procedure / query.
\nEdit:
\nDECLARE @ResultOfTheFirstQuery nvarchar(max)\n\nSELECT @ResultOfTheFirstQuery = (Select Top(1)RequiredQuery \n as ReqQry from EPMaster)\n\nexec sp_executeSql @ResultOfTheFirstQuery\n
\nOr if you need more complex logic, you can write another SP, which can have a return value:
\nDECLARE @ResultOfTheFirstQuery nvarchar(max)\n\nSELECT @ResultOfTheFirstQuery = FirstStoredprocedure @params\n\nexec sp_executeSql @ResultOfTheFirstQuery\n
\nHere is an already well-answered question on how to get the parameter returned. You can use a RETURN value or an OUTPUT parameter.
\nHere is how to use sp_executesql.
\n
soup wrap:
You should use dynamic SQL to run the nvarchar(max) query string returned by the first procedure / query.
Edit:
DECLARE @ResultOfTheFirstQuery nvarchar(max)
SELECT @ResultOfTheFirstQuery = (Select Top(1)RequiredQuery
as ReqQry from EPMaster)
exec sp_executeSql @ResultOfTheFirstQuery
Or if you need more complex logic, you can write another SP, which can have a return value:
DECLARE @ResultOfTheFirstQuery nvarchar(max)
SELECT @ResultOfTheFirstQuery = FirstStoredprocedure @params
exec sp_executeSql @ResultOfTheFirstQuery
Here is an already well-answered question on how to get the parameter returned. You can use a RETURN value or an OUTPUT parameter.
Here is how to use sp_executesql.
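The overall pattern — fetch a query string stored in a table, then execute that string — can be sketched outside SQL Server too; here in Python with sqlite3, where a plain second execute call plays the role of sp_executesql (table name from the answer, stored query invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EPMaster (RequiredQuery TEXT)")
conn.execute("INSERT INTO EPMaster VALUES ('SELECT 1 + 1')")

# step 1: fetch the SQL text stored in the table
query_text = conn.execute(
    "SELECT RequiredQuery FROM EPMaster LIMIT 1").fetchone()[0]

# step 2: execute the fetched text as a query (the dynamic-SQL step)
result = conn.execute(query_text).fetchone()[0]
print(result)  # 2
```

The usual caveat applies: only execute query text from a trusted source, since this is effectively eval for SQL.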
qid & accept id:
(12407247, 12407311)
query:
SQL stored procedure passing parameter into "order by"
soup:
Only by being slightly silly:
\nCREATE PROCEDURE [dbo].[TopVRM]\n@orderby varchar(255)\nAS\nSELECT Peroid1.Pareto FROM dbo.Peroid1\nGROUP by Pareto\nORDER by CASE WHEN @orderby='ASC' THEN Pareto END,\n CASE WHEN @orderby='DESC' THEN Pareto END DESC\n
\nYou don't strictly need to put the second sort condition in a CASE expression at all(*), and if Pareto is numeric, you may decide to just do CASE WHEN @orderby='ASC' THEN 1 ELSE -1 END * Pareto
\n(*) The second sort condition only has an effect when the first sort condition considers two rows to be equal. This is either when both rows have the same Pareto value (so the reverse sort would also consider them equal), or because the first CASE expression is returning NULLs (so @orderby isn't 'ASC', and we want to perform the DESC sort).
\n
\nYou might also want to consider retrieving both result sets in one go, rather than doing two calls:
\nCREATE PROCEDURE [dbo].[TopVRM]\n@orderby varchar(255)\nAS\n\nSELECT * FROM (\n SELECT\n *,\n ROW_NUMBER() OVER (ORDER BY Pareto) as rn1,\n ROW_NUMBER() OVER (ORDER BY Pareto DESC) as rn2\n FROM (\n SELECT Peroid1.Pareto\n FROM dbo.Peroid1\n GROUP by Pareto\n ) t\n) t2\nWHERE rn1 between 1 and 10 or rn2 between 1 and 10\nORDER BY rn1\n
\nThis will give you the top 10 and the bottom 10, in order from top to bottom. But if there are fewer than 20 results in total, you won't get duplicates, unlike your current plan.
\n
soup wrap:
Only by being slightly silly:
CREATE PROCEDURE [dbo].[TopVRM]
@orderby varchar(255)
AS
SELECT Peroid1.Pareto FROM dbo.Peroid1
GROUP by Pareto
ORDER by CASE WHEN @orderby='ASC' THEN Pareto END,
CASE WHEN @orderby='DESC' THEN Pareto END DESC
You don't strictly need to put the second sort condition in a CASE expression at all(*), and if Pareto is numeric, you may decide to just do CASE WHEN @orderby='ASC' THEN 1 ELSE -1 END * Pareto
(*) The second sort condition only has an effect when the first sort condition considers two rows to be equal. This is either when both rows have the same Pareto value (so the reverse sort would also consider them equal), or because the first CASE expression is returning NULLs (so @orderby isn't 'ASC', and we want to perform the DESC sort).
You might also want to consider retrieving both result sets in one go, rather than doing two calls:
CREATE PROCEDURE [dbo].[TopVRM]
@orderby varchar(255)
AS
SELECT * FROM (
SELECT
*,
ROW_NUMBER() OVER (ORDER BY Pareto) as rn1,
ROW_NUMBER() OVER (ORDER BY Pareto DESC) as rn2
FROM (
SELECT Peroid1.Pareto
FROM dbo.Peroid1
GROUP by Pareto
) t
) t2
WHERE rn1 between 1 and 10 or rn2 between 1 and 10
ORDER BY rn1
This will give you the top 10 and the bottom 10, in order from top to bottom. But if there are fewer than 20 results in total, you won't get duplicates, unlike your current plan.
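The CASE-based conditional ORDER BY also works in other engines; a sketch against SQLite, with @orderby passed as a bound parameter (sample values invented, table name spelled as in the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Peroid1 (Pareto INTEGER)")  # name as in the question
conn.executemany("INSERT INTO Peroid1 VALUES (?)", [(3,), (1,), (2,)])

def top_vrm(orderby):
    # each CASE yields NULL when its direction isn't selected,
    # so only one of the two sort keys takes effect
    return [r[0] for r in conn.execute("""
        SELECT Pareto FROM Peroid1
        GROUP BY Pareto
        ORDER BY CASE WHEN ? = 'ASC'  THEN Pareto END,
                 CASE WHEN ? = 'DESC' THEN Pareto END DESC
    """, (orderby, orderby))]

print(top_vrm("ASC"))   # [1, 2, 3]
print(top_vrm("DESC"))  # [3, 2, 1]
```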
qid & accept id:
(12419421, 12419497)
query:
[FIXED]From 2 mySQL databases, to one
soup:
--To get all the columns from locatie table
\nselect l.* from locatie l\njoin persooninfo p\non l.id=p.id_p\n
\n--To get all the columns from persooninfo table
\nselect p.* from locatie l\njoin persooninfo p\non l.id=p.id_p\n
\n--To get all the columns from persooninfo and locatie table
\nselect * from locatie l\njoin persooninfo p\non l.id=p.id_p\n
\n
soup wrap:
--To get all the columns from locatie table
select l.* from locatie l
join persooninfo p
on l.id=p.id_p
--To get all the columns from persooninfo table
select p.* from locatie l
join persooninfo p
on l.id=p.id_p
--To get all the columns from persooninfo and locatie table
select * from locatie l
join persooninfo p
on l.id=p.id_p
qid & accept id:
(12419854, 12420006)
query:
Dropping the same column name from mutiple tables in Oracle
soup:
No. An ALTER TABLE statement cannot alter more than one table at a time. You could write some dynamic SQL based on ALL_TAB_COLS, e.g.
\nSELECT 'ALTER TABLE ' || owner || '.' || table_name || ' DROP COLUMN '|| column_name || ';'\nFROM all_tab_columns\nWHERE column_name = 'MY_UNWANTED_COLUMN'\nAND owner = 'MY_OWNER'\n/\n
\nthen run that script. You might want to add
\nAND table_name IN ('MY_TAB1','MY_TAB2')\n
\nto specify an exact list of tables for extra peace of mind.
\n
soup wrap:
No. An ALTER TABLE statement cannot alter more than one table at a time. You could write some dynamic SQL based on ALL_TAB_COLS, e.g.
SELECT 'ALTER TABLE ' || owner || '.' || table_name || ' DROP COLUMN '|| column_name || ';'
FROM all_tab_columns
WHERE column_name = 'MY_UNWANTED_COLUMN'
AND owner = 'MY_OWNER'
/
then run that script. You might want to add
AND table_name IN ('MY_TAB1','MY_TAB2')
to specify an exact list of tables for extra peace of mind.
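The generation step itself is trivial to script outside the database as well; a sketch in Python that builds the ALTER statements from the (owner, table) pairs the ALL_TAB_COLUMNS query would return (the owner, table, and column names are the placeholders from the answer):

```python
def drop_column_script(rows, column):
    # rows: (owner, table_name) pairs, as returned by the catalog query
    return [f"ALTER TABLE {owner}.{table} DROP COLUMN {column};"
            for owner, table in rows]

rows = [("MY_OWNER", "MY_TAB1"), ("MY_OWNER", "MY_TAB2")]
for stmt in drop_column_script(rows, "MY_UNWANTED_COLUMN"):
    print(stmt)
# ALTER TABLE MY_OWNER.MY_TAB1 DROP COLUMN MY_UNWANTED_COLUMN;
# ALTER TABLE MY_OWNER.MY_TAB2 DROP COLUMN MY_UNWANTED_COLUMN;
```

Review the generated script before running it; DROP COLUMN is destructive and not easily reversed.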
qid & accept id:
(12456897, 12457017)
query:
MySQL: same field value in multiple UNION
soup:
I think this is enough:
\nSELECT candidate_id \nFROM actions_log AS a\nWHERE job_id = 1858 \n AND ( action = 'a' \n OR action = 'b' \n AND EXISTS \n ( SELECT candidate_id \n FROM actions_log \n WHERE job_id = a.job_id\n AND action = 'c'\n )\n ) ;\n
\nor if you want to have the conditions separated, so you can build more complex queries easier:
\n SELECT candidate_id \n FROM actions_log AS a\n WHERE job_id = 1858 \n AND action = 'a' \nUNION DISTINCT\n SELECT b.candidate_id \n FROM actions_log AS b\n JOIN actions_log AS c\n ON c.candidate_id = b.candidate_id\n AND c.job_id = b.job_id\n WHERE b.job_id = 1858 \n AND b.action = 'b'\n AND c.action = 'c' ;\n
\n
soup wrap:
I think this is enough:
SELECT candidate_id
FROM actions_log AS a
WHERE job_id = 1858
AND ( action = 'a'
OR action = 'b'
AND EXISTS
( SELECT candidate_id
FROM actions_log
WHERE job_id = a.job_id
AND action = 'c'
)
) ;
or if you want to have the conditions separated, so you can build more complex queries easier:
SELECT candidate_id
FROM actions_log AS a
WHERE job_id = 1858
AND action = 'a'
UNION DISTINCT
SELECT b.candidate_id
FROM actions_log AS b
JOIN actions_log AS c
ON c.candidate_id = b.candidate_id
AND c.job_id = b.job_id
WHERE b.job_id = 1858
AND b.action = 'b'
AND c.action = 'c' ;
qid & accept id:
(12463628, 12464045)
query:
MySQL - Get a counter for each duplicate value
soup:
Unfortunately, MySQL does not have windowing functions, which is what you would need here. So you will have to use something like this:
\nFinal Query
\nselect data, group_row_number, overall_row_num\nfrom\n(\n select data,\n @num := if(@data = `data`, @num + 1, 1) as group_row_number,\n @data := `data` as dummy, overall_row_num\n from\n (\n select data, @rn:=@rn+1 overall_row_num\n from yourtable, (SELECT @rn:=0) r\n ) x\n order by data, overall_row_num\n) x\norder by overall_row_num\n
\n\nExplanation:
\nFirst, inner select, this applies a mock row_number to all of the records in your table (See SQL Fiddle with Demo):
\nselect data, @rn:=@rn+1 overall_row_num\nfrom yourtable, (SELECT @rn:=0) r\n
\nSecond part of the query, compares each row in your table to the next one to see if it has the same value, if it doesn't then start the group_row_number over (see SQL Fiddle with Demo):
\nselect data,\n @num := if(@data = `data`, @num + 1, 1) as group_row_number,\n @data := `data` as dummy, overall_row_num\nfrom\n(\n select data, @rn:=@rn+1 overall_row_num\n from yourtable, (SELECT @rn:=0) r\n) x\norder by data, overall_row_num\n
\nThe last select, returns the values you want and places them back in the order you requested:
\nselect data, group_row_number, overall_row_num\nfrom\n(\n select data,\n @num := if(@data = `data`, @num + 1, 1) as group_row_number,\n @data := `data` as dummy, overall_row_num\n from\n (\n select data, @rn:=@rn+1 overall_row_num\n from yourtable, (SELECT @rn:=0) r\n ) x\n order by data, overall_row_num\n) x\norder by overall_row_num\n
\n
soup wrap:
Unfortunately, MySQL does not have windowing functions, which is what you would need here. So you will have to use something like this:
Final Query
select data, group_row_number, overall_row_num
from
(
select data,
@num := if(@data = `data`, @num + 1, 1) as group_row_number,
@data := `data` as dummy, overall_row_num
from
(
select data, @rn:=@rn+1 overall_row_num
from yourtable, (SELECT @rn:=0) r
) x
order by data, overall_row_num
) x
order by overall_row_num
Explanation:
First, inner select, this applies a mock row_number to all of the records in your table (See SQL Fiddle with Demo):
select data, @rn:=@rn+1 overall_row_num
from yourtable, (SELECT @rn:=0) r
Second part of the query, compares each row in your table to the next one to see if it has the same value, if it doesn't then start the group_row_number over (see SQL Fiddle with Demo):
select data,
@num := if(@data = `data`, @num + 1, 1) as group_row_number,
@data := `data` as dummy, overall_row_num
from
(
select data, @rn:=@rn+1 overall_row_num
from yourtable, (SELECT @rn:=0) r
) x
order by data, overall_row_num
The last select, returns the values you want and places them back in the order you requested:
select data, group_row_number, overall_row_num
from
(
select data,
@num := if(@data = `data`, @num + 1, 1) as group_row_number,
@data := `data` as dummy, overall_row_num
from
(
select data, @rn:=@rn+1 overall_row_num
from yourtable, (SELECT @rn:=0) r
) x
order by data, overall_row_num
) x
order by overall_row_num
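Since the user-variable trick is really just emulating two row counters, the same result can be computed outside MySQL; a Python sketch with itertools.groupby over invented data, restarting group_row_number whenever the value changes:

```python
from itertools import groupby

data = ["a", "b", "a", "b", "b", "c"]

# attach the original (overall) position first, then sort by value —
# this mirrors the inner @rn counter plus the ORDER BY data step
numbered = sorted(enumerate(data, start=1), key=lambda p: (p[1], p[0]))

result = []
for _, grp in groupby(numbered, key=lambda p: p[1]):
    # group_row_number restarts at 1 for each distinct value,
    # like the @num := if(@data = data, @num + 1, 1) expression
    for group_row_number, (overall, value) in enumerate(grp, start=1):
        result.append((value, group_row_number, overall))

# put rows back in their original order, as the outer SELECT does
result.sort(key=lambda r: r[2])
print(result)
# [('a', 1, 1), ('b', 1, 2), ('a', 2, 3), ('b', 2, 4), ('b', 3, 5), ('c', 1, 6)]
```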
qid & accept id:
(12498046, 12498385)
query:
SQL - get latest records from table where field is unique
soup:
See SQL Fiddle
\nSELECT T.*\nFROM T\nWHERE NOT EXISTS (\n SELECT * \n FROM T AS _T\n WHERE _T.conversation_id = T.conversation_id\n AND (\n _T.date_created > T.date_created\n OR\n _T.date_created = T.date_created AND _T.id > T.id) \n)\nORDER BY T.date_created DESC\n
\ngets
\nID STATUS CONVERSATION_ID MESSAGE_ID DATE_CREATED\n3 2 2 95 May, 05 2012 \n2 2 1 87 March, 03 2012 \n
\n
soup wrap:
See SQL Fiddle
SELECT T.*
FROM T
WHERE NOT EXISTS (
SELECT *
FROM T AS _T
WHERE _T.conversation_id = T.conversation_id
AND (
_T.date_created > T.date_created
OR
_T.date_created = T.date_created AND _T.id > T.id)
)
ORDER BY T.date_created DESC
gets
ID STATUS CONVERSATION_ID MESSAGE_ID DATE_CREATED
3 2 2 95 May, 05 2012
2 2 1 87 March, 03 2012
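A quick way to convince yourself the NOT EXISTS form (including the id tie-break) picks the right rows is to run it on a toy table; a sketch using Python's sqlite3, with the schema reduced to the columns the predicate needs and dates simplified to ISO strings:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE T (id INTEGER, conversation_id INTEGER, date_created TEXT)")
conn.executemany("INSERT INTO T VALUES (?,?,?)", [
    (1, 1, "2012-01-01"),
    (2, 1, "2012-03-03"),
    (3, 2, "2012-05-05"),
])

# keep a row only if no later row (or equal-date row with a higher id)
# exists in the same conversation
rows = conn.execute("""
    SELECT T.* FROM T
    WHERE NOT EXISTS (
        SELECT 1 FROM T AS _T
        WHERE _T.conversation_id = T.conversation_id
          AND (_T.date_created > T.date_created
               OR (_T.date_created = T.date_created AND _T.id > T.id))
    )
    ORDER BY T.date_created DESC
""").fetchall()
print(rows)  # [(3, 2, '2012-05-05'), (2, 1, '2012-03-03')]
```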
qid & accept id:
(12527563, 12527948)
query:
it is possible to "group by" without losing the original rows?
soup:
One obvious solution is storing intermediate results within another 'temporary' table, and then performing the aggregation in a second step.
\nAnother solution is preparing a lookup table containing sums you need (but there obviously needs to be some grouping ID, I call it MASTER_ID), like that:
\nCREATE TABLE comm_lkp AS\nSELECT MASTER_ID, SUM(commentsCount) as cnt\nFROM mycontents\nGROUP BY MASTER_ID\n
\nAlso create an index on that table on column MASTER_ID. Later, you can modify your query like that:
\nSELECT\n ...,\n commentsCount,\n cnt as commentsSum\nFROM\n mycontents as a\n JOIN comm_lkp as b ON (a.MASTER_ID=b.MASTER_ID)\nWHERE\n name LIKE "%mysql%"\n
\nIt also shouldn't hurt your performance, as long as the lookup table stays relatively small.
\n
soup wrap:
One obvious solution is storing intermediate results within another 'temporary' table, and then performing the aggregation in a second step.
Another solution is preparing a lookup table containing sums you need (but there obviously needs to be some grouping ID, I call it MASTER_ID), like that:
CREATE TABLE comm_lkp AS
SELECT MASTER_ID, SUM(commentsCount) as cnt
FROM mycontents
GROUP BY MASTER_ID
Also create an index on that table on column MASTER_ID. Later, you can modify your query like that:
SELECT
...,
commentsCount,
cnt as commentsSum
FROM
mycontents as a
JOIN comm_lkp as b ON (a.MASTER_ID=b.MASTER_ID)
WHERE
name LIKE "%mysql%"
It also shouldn't hurt your performance, as long as the lookup table stays relatively small.
qid & accept id:
(12530027, 12530093)
query:
Duplicate table and move it to different filegroup
soup:
You could change the default filegroup before the select into, and reset it after:
\nselect 41 as i into newtable1\nalter database test modify filegroup [secondary] default\nselect 41 as i into newtable2\nalter database test modify filegroup [primary] default\n\nselect t.name as TableName\n, f.name as Filegroup\nfrom sys.tables t\njoin sys.indexes i\non t.object_id = i.object_id\njoin sys.filegroups f\non f.data_space_id = i.data_space_id\nwhere t.name like 'newtable%'\n
\nThis prints:
\nTableName Filegroup\nnewtable1 PRIMARY\nnewtable2 SECONDARY\n
\n
soup wrap:
You could change the default filegroup before the select into, and reset it after:
select 41 as i into newtable1
alter database test modify filegroup [secondary] default
select 41 as i into newtable2
alter database test modify filegroup [primary] default
select t.name as TableName
, f.name as Filegroup
from sys.tables t
join sys.indexes i
on t.object_id = i.object_id
join sys.filegroups f
on f.data_space_id = i.data_space_id
where t.name like 'newtable%'
This prints:
TableName Filegroup
newtable1 PRIMARY
newtable2 SECONDARY
qid & accept id:
(12544051, 12545114)
query:
Randomly assign work location and each location should not exceed the number of designated employees
soup:
Maybe something like this:
\nselect C.* from \n(\n select *, ROW_NUMBER() OVER(PARTITION BY P.PlaceID, E.Designation ORDER BY NEWID()) AS RandPosition\n from Place as P cross join Employee E\n where P.PlaceName != E.Home AND P.PlaceName != E.CurrentPosting\n) as C\nwhere \n (C.Designation = 'Manager' AND C.RandPosition <= C.Manager) OR\n (C.Designation = 'PO' AND C.RandPosition <= C.PO) OR\n (C.Designation = 'Clerk' AND C.RandPosition <= C.Clerk)\n
\nThat should attempt to match employees randomly based on their designation discarding same currentPosting and home, and not assign more than what is specified in each column for the designation. However, this could return the same employee for several places, since they could match more than one based on that criteria.
\n
\nEDIT:\nAfter seeing your comment about not having a need for a high performing single query to solve this problem (which I'm not sure is even possible), and since it seems to be more of a "one-off" process that you will be calling, I wrote up the following code using a cursor and one temporary table to solve your problem of assignments:
\nselect *, null NewPlaceID into #Employee from Employee\n\ndeclare @empNo int\nDECLARE emp_cursor CURSOR FOR \nSELECT EmpNo from Employee order by newid()\n\nOPEN emp_cursor \nFETCH NEXT FROM emp_cursor INTO @empNo\n\nWHILE @@FETCH_STATUS = 0 \nBEGIN\n update #Employee \n set NewPlaceID = \n (\n select top 1 p.PlaceID from Place p \n where \n p.PlaceName != #Employee.Home AND \n p.PlaceName != #Employee.CurrentPosting AND\n (\n CASE #Employee.Designation \n WHEN 'Manager' THEN p.Manager\n WHEN 'PO' THEN p.PO\n WHEN 'Clerk' THEN p.Clerk\n END\n ) > (select count(*) from #Employee e2 where e2.NewPlaceID = p.PlaceID AND e2.Designation = #Employee.Designation)\n order by newid()\n ) \n where #Employee.EmpNo = @empNo\n FETCH NEXT FROM emp_cursor INTO @empNo \nEND\n\nCLOSE emp_cursor\nDEALLOCATE emp_cursor\n\nselect e.*, p.PlaceName as RandomPosting from Employee e\ninner join #Employee e2 on (e.EmpNo = e2.EmpNo)\ninner join Place p on (e2.NewPlaceID = p.PlaceID)\n\ndrop table #Employee\n
\nThe basic idea is, that it iterates over the employees, in random order, and assigns to each one a random Place that meets the criteria of different home and current posting, as well as controlling the amount that get assigned to each place for each Designation to ensure that the locations are not "over-assigned" for each role.
\nThis snippet doesn't actually alter your data though. The final SELECT statement just returns the proposed assignments. However you could very easily alter it to make actual changes to your Employee table accordingly.
\n
soup wrap:
Maybe something like this:
select C.* from
(
select *, ROW_NUMBER() OVER(PARTITION BY P.PlaceID, E.Designation ORDER BY NEWID()) AS RandPosition
from Place as P cross join Employee E
where P.PlaceName != E.Home AND P.PlaceName != E.CurrentPosting
) as C
where
(C.Designation = 'Manager' AND C.RandPosition <= C.Manager) OR
(C.Designation = 'PO' AND C.RandPosition <= C.PO) OR
(C.Designation = 'Clerk' AND C.RandPosition <= C.Clerk)
That should attempt to match employees randomly based on their designation discarding same currentPosting and home, and not assign more than what is specified in each column for the designation. However, this could return the same employee for several places, since they could match more than one based on that criteria.
EDIT:
After seeing your comment about not having a need for a high performing single query to solve this problem (which I'm not sure is even possible), and since it seems to be more of a "one-off" process that you will be calling, I wrote up the following code using a cursor and one temporary table to solve your problem of assignments:
select *, null NewPlaceID into #Employee from Employee
declare @empNo int
DECLARE emp_cursor CURSOR FOR
SELECT EmpNo from Employee order by newid()
OPEN emp_cursor
FETCH NEXT FROM emp_cursor INTO @empNo
WHILE @@FETCH_STATUS = 0
BEGIN
update #Employee
set NewPlaceID =
(
select top 1 p.PlaceID from Place p
where
p.PlaceName != #Employee.Home AND
p.PlaceName != #Employee.CurrentPosting AND
(
CASE #Employee.Designation
WHEN 'Manager' THEN p.Manager
WHEN 'PO' THEN p.PO
WHEN 'Clerk' THEN p.Clerk
END
) > (select count(*) from #Employee e2 where e2.NewPlaceID = p.PlaceID AND e2.Designation = #Employee.Designation)
order by newid()
)
where #Employee.EmpNo = @empNo
FETCH NEXT FROM emp_cursor INTO @empNo
END
CLOSE emp_cursor
DEALLOCATE emp_cursor
select e.*, p.PlaceName as RandomPosting from Employee e
inner join #Employee e2 on (e.EmpNo = e2.EmpNo)
inner join Place p on (e2.NewPlaceID = p.PlaceID)
drop table #Employee
The basic idea is, that it iterates over the employees, in random order, and assigns to each one a random Place that meets the criteria of different home and current posting, as well as controlling the amount that get assigned to each place for each Designation to ensure that the locations are not "over-assigned" for each role.
This snippet doesn't actually alter your data though. The final SELECT statement just returns the proposed assignments. However you could very easily alter it to make actual changes to your Employee table accordingly.
qid & accept id:
(12579635, 12579757)
query:
MySQL: Migrating data into a many to many relationship from an OldDB plain table
soup:
try this:
\nINSERT NewDB.center_has_b (center_id, b_id)\n select 'N', oldb_id from OldDB.oldb WHERE centerN = 1\n
\nEDIT: This is based on the first comment for this answer
\ninsert into center_has_b (center_id,b_id)\nselect c.center_id, old.b_id\nfrom centers c\ncross join old.b\nwhere Allcenters = 'Y'\n
\n
soup wrap:
try this:
INSERT NewDB.center_has_b (center_id, b_id)
select 'N', oldb_id from OldDB.oldb WHERE centerN = 1
EDIT: This is based on the first comment for this answer
insert into center_has_b (center_id,b_id)
select c.center_id, old.b_id
from centers c
cross join old.b
where Allcenters = 'Y'
qid & accept id:
(12590682, 12590748)
query:
MySQL database design: User and event table
soup:
Yes, you will want to create a JOIN table for the users and the events. Similar to this:
\ncreate table users\n(\n id int,\n name varchar(10) -- add other fields as needed\n);\n\ncreate table events\n(\n id int,\n name varchar(10),\n e_owner_id int, -- userId of who created the event\n e_date datetime -- add other fields as needed\n);\n\ncreate table users_events -- when user wants to attend a record will be added to this table\n(\n u_id int,\n e_id int\n);\n
\nThen to query, you would use something like this:
\nselect *\nfrom users u\nleft join users_events ue\n on u.id = ue.u_id\nleft join events e\n on ue.e_id = e.id;\n
\n
soup wrap:
Yes, you will want to create a JOIN table for the users and the events. Similar to this:
create table users
(
id int,
name varchar(10) -- add other fields as needed
);
create table events
(
id int,
name varchar(10),
e_owner_id int, -- userId of who created the event
e_date datetime -- add other fields as needed
);
create table users_events -- when user wants to attend a record will be added to this table
(
u_id int,
e_id int
);
Then to query, you would use something like this:
select *
from users u
left join users_events ue
on u.id = ue.u_id
left join events e
on ue.e_id = e.id;
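A minimal end-to-end check of the schema and query, using Python's sqlite3 (sample users and events invented); note how the LEFT JOINs keep users who attend no events:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER, name TEXT);
    CREATE TABLE events (id INTEGER, name TEXT);
    CREATE TABLE users_events (u_id INTEGER, e_id INTEGER);
    INSERT INTO users  VALUES (1, 'ann'), (2, 'bob');
    INSERT INTO events VALUES (10, 'party');
    INSERT INTO users_events VALUES (1, 10);   -- ann attends the party
""")

rows = conn.execute("""
    SELECT u.name, e.name
    FROM users u
    LEFT JOIN users_events ue ON u.id = ue.u_id
    LEFT JOIN events e        ON ue.e_id = e.id
    ORDER BY u.id
""").fetchall()
print(rows)  # [('ann', 'party'), ('bob', None)]
```

In a real schema you would also add primary keys and foreign-key constraints from users_events to the two parent tables.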
qid & accept id:
(12593776, 12593873)
query:
Oracle SQL: Joining another table with one missing tuple
soup:
select *\n from order_information oi\n left join mass_decode md \n on (\n oi.color_cd = md.cd \n and oi.key = md.key\n )\nwhere oi.key = 'KEY_A';\n
\n\nupd:
\nAccording to your updates:
\nselect *\n from order_information oi\n left join mass_decode md \n on oi.color_cd = md.cd\nwhere md.key = 'COLOR_CD' or md.key is null;\n
\n\n
soup wrap:
select *
from order_information oi
left join mass_decode md
on (
oi.color_cd = md.cd
and oi.key = md.key
)
where oi.key = 'KEY_A';
upd:
According to your updates:
select *
from order_information oi
left join mass_decode md
on oi.color_cd = md.cd
where md.key = 'COLOR_CD' or md.key is null;
qid & accept id:
(12698945, 12698989)
query:
sql oracle duplicates
soup:
There are several ways to do this - see SQL Fiddle with Demo of all queries
\nYou can use a subquery:
\nselect t1.asset_no,\n t1.sub,\n t1.add_dtm\nfrom table1 t1\ninner join\n(\n select max(add_dtm) mxdate, asset_no\n from table1\n group by asset_no\n) t2\n on t1.add_dtm = t2.mxdate\n and t1.asset_no = t2.asset_no\n
\nor you can use CTE using row_number():
\nwith cte as\n(\n select asset_no,\n sub,\n add_dtm,\n row_number() over(partition by asset_no \n order by add_dtm desc) rn\n from table1\n) \nselect *\nfrom cte\nwhere rn = 1\n
\nOr without CTE using row_number():
\nselect *\nfrom \n(\n select asset_no,\n sub,\n add_dtm,\n row_number() over(partition by asset_no \n order by add_dtm desc) rn\n from table1\n) x\nwhere rn = 1\n
\n
soup wrap:
There are several ways to do this - see SQL Fiddle with Demo of all queries
You can use a subquery:
select t1.asset_no,
t1.sub,
t1.add_dtm
from table1 t1
inner join
(
select max(add_dtm) mxdate, asset_no
from table1
group by asset_no
) t2
on t1.add_dtm = t2.mxdate
and t1.asset_no = t2.asset_no
or you can use CTE using row_number():
with cte as
(
select asset_no,
sub,
add_dtm,
row_number() over(partition by asset_no
order by add_dtm desc) rn
from table1
)
select *
from cte
where rn = 1
Or without CTE using row_number():
select *
from
(
select asset_no,
sub,
add_dtm,
row_number() over(partition by asset_no
order by add_dtm desc) rn
from table1
) x
where rn = 1
qid & accept id:
(12712480, 12712774)
query:
SQL query to test if string value contains carriage return
soup:
To find a value that contains non-printable characters such as carriage return, vertical tab, or line feed, you can use the regexp_like function. In your case, to display rows where a column's string value ends with a carriage return, a query like the following can be used.
\nselect *\n from your_table_name\n where regexp_like(trim(string_column), '[[:space:]]$')\n
\n\n
\nAnswer to the comments
\nThe trim function, by default, deletes leading and trailing spaces; it will not delete carriage-return or line-feed characters. Let's carry out a simple test:
\nSQL> create table Test_Table(\n 2 id number,\n 3 col1 varchar2(101)\n 4 );\n\nTable created\n\nSQL> insert into Test_Table (id, col1)\n 2 values(1, 'Simple string');\n\n1 row inserted\n\nSQL> commit;\n\nCommit complete\n\nSQL> insert into Test_Table (id, col1)\n 2 values(1, 'Simple string with carriage return at the end' || chr(13));\n\n1 row inserted\n\nSQL> commit;\n\nCommit complete\n\nSQL> insert into Test_Table (id, col1)\n 2 values(1, ' Simple string with carriage return at the end leading and trailing spaces' || chr(13)||' ');\n\n1 row inserted\n\nSQL> commit;\n\nCommit complete\n\nSQL> insert into Test_Table (id, col1)\n 2 values(1, ' Simple string leading and trailing spaces ');\n\n1 row inserted\n\nSQL> commit;\n\nCommit complete\n\nSQL> select *\n 2 from test_table;\n\n ID COL1\n--------------------------------------------------------------------------------\n 1 Simple string\n 1 Simple string with carriage return at the end\n 1 Simple string with carriage return at the end leading and trailing spaces\n 1 Simple string leading and trailing spaces\n\nSQL> \nSQL> select *\n 2 from test_table\n 3 where regexp_like(trim(col1), '[[:space:]]$')\n 4 ;\n\n ID COL1\n----------------------------------------------------------------------------------\n 1 Simple string with carriage return at the end\n 1 Simple string with carriage return at the end leading and trailing spaces\n\nSQL> \n
\n
soup wrap:
To find a value that contains non-printable characters such as a carriage return, vertical tab, or line feed, you can use the regexp_like function. In your case, to display rows where the string value of a particular column ends with a carriage return, a query like the following can be used:
select *
from your_table_name
where regexp_like(trim(string_column), '[[:space:]]$')
Answer to the comments
The trim function, by default, deletes only leading and trailing spaces; it will not delete carriage return or line feed characters. Let's carry out a simple test:
SQL> create table Test_Table(
2 id number,
3 col1 varchar2(101)
4 );
Table created
SQL> insert into Test_Table (id, col1)
2 values(1, 'Simple string');
1 row inserted
SQL> commit;
Commit complete
SQL> insert into Test_Table (id, col1)
2 values(1, 'Simple string with carriage return at the end' || chr(13));
1 row inserted
SQL> commit;
Commit complete
SQL> insert into Test_Table (id, col1)
2 values(1, ' Simple string with carriage return at the end leading and trailing spaces' || chr(13)||' ');
1 row inserted
SQL> commit;
Commit complete
SQL> insert into Test_Table (id, col1)
2 values(1, ' Simple string leading and trailing spaces ');
1 row inserted
SQL> commit;
Commit complete
SQL> select *
2 from test_table;
ID COL1
--------------------------------------------------------------------------------
1 Simple string
1 Simple string with carriage return at the end
1 Simple string with carriage return at the end leading and trailing spaces
1 Simple string leading and trailing spaces
SQL>
SQL> select *
2 from test_table
3 where regexp_like(trim(col1), '[[:space:]]$')
4 ;
ID COL1
----------------------------------------------------------------------------------
1 Simple string with carriage return at the end
1 Simple string with carriage return at the end leading and trailing spaces
SQL>
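The same behaviour can be checked outside Oracle. Below is a small Python + sqlite3 sketch of the idea (SQLite has no regexp_like, so a REGEXP function is registered by hand; the table and its rows are invented for illustration):

```python
import re
import sqlite3

# Illustrative sketch only: SQLite stands in for Oracle here, and sqlite3 has
# no built-in REGEXP, so we register one ourselves.
conn = sqlite3.connect(":memory:")
conn.create_function("REGEXP", 2, lambda pat, s: re.search(pat, s) is not None)

conn.execute("CREATE TABLE test_table (id INTEGER, col1 TEXT)")
conn.executemany("INSERT INTO test_table VALUES (?, ?)", [
    (1, "Simple string"),
    (2, "Ends with carriage return\r"),
    (3, "  padded, ends with CR\r  "),
    (4, "  padded only  "),
])

# Like Oracle's default TRIM, SQLite's TRIM strips only spaces, so a trailing
# carriage return survives and still matches the "ends in whitespace" pattern.
hits = conn.execute(
    "SELECT id FROM test_table WHERE TRIM(col1) REGEXP '\\s$'"
).fetchall()
print(hits)  # rows 2 and 3
```

As in the Oracle test above, only the rows whose trimmed value still ends in whitespace come back.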
qid & accept id:
(12713468, 12713578)
query:
Can SQL determine which values from a set of possible column values do not exist?
soup:
Stick the allowed values in a temporary table allowed, then use a subquery using NOT IN:
\nSELECT *\nFROM allowed\nWHERE allowed.val NOT IN (\n SELECT maintable.val\n)\n
\nSome DBs will allow you to build up a table "in-place", instead of having to create a separate table. E.g. in PostgreSQL (any version):
\nSELECT *\nFROM (\n SELECT 'foo'\n UNION ALL SELECT 'bar'\n UNION ALL SELECT 'baz' -- etc.\n) inplace_allowed\nWHERE inplace_allowed.val NOT IN (\n SELECT maintable.val\n)\n
\nMore modern versions of PostgreSQL (and perhaps other DBs) will let you use the slightly nicer VALUES syntax to do the same thing.
\n
soup wrap:
Stick the allowed values in a temporary table allowed, then use a subquery using NOT IN:
SELECT *
FROM allowed
WHERE allowed.val NOT IN (
SELECT maintable.val
FROM maintable
)
Some DBs will allow you to build up a table "in-place", instead of having to create a separate table. E.g. in PostgreSQL (any version):
SELECT *
FROM (
SELECT 'foo' AS val
UNION ALL SELECT 'bar'
UNION ALL SELECT 'baz' -- etc.
) inplace_allowed
WHERE inplace_allowed.val NOT IN (
SELECT maintable.val
FROM maintable
)
More modern versions of PostgreSQL (and perhaps other DBs) will let you use the slightly nicer VALUES syntax to do the same thing.
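For illustration, here is a runnable Python + sqlite3 sketch of the in-place allowed list (maintable and its contents are invented):

```python
import sqlite3

# Invented example data: maintable is missing 'bar'.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE maintable (val TEXT)")
conn.executemany("INSERT INTO maintable VALUES (?)", [("foo",), ("baz",)])

# Build the allowed list "in place" and anti-join it against maintable.
missing = conn.execute("""
    SELECT val FROM (
        SELECT 'foo' AS val
        UNION ALL SELECT 'bar'
        UNION ALL SELECT 'baz'
    ) AS inplace_allowed
    WHERE val NOT IN (SELECT val FROM maintable)
""").fetchall()
print(missing)  # [('bar',)]
```

One caveat worth knowing: if maintable.val can be NULL, `NOT IN (SELECT ...)` yields no rows at all; rewriting with `NOT EXISTS` sidesteps that.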
qid & accept id:
(12730070, 12730361)
query:
I need a way to use column values as column names in MySQL
soup:
You are trying to PIVOT the data but MySQL does not have a PIVOT function. Also to make this easier, you will want to partition the data based on the degerAdi value to apply a rownumber. If you have a known number of columns, then you can use:
\nselect rn,\n max(case when DEGERADI = 'asd' then DEGER end) asd,\n max(case when DEGERADI = 'rty' then DEGER end) rty,\n max(case when DEGERADI = 'hhh' then DEGER end) hhh,\n max(case when DEGERADI = 'hjh' then DEGER end) hjh,\n max(case when DEGERADI = 'ffgu' then DEGER end) ffgu,\n max(case when DEGERADI = 'qwe' then DEGER end) qwe\nfrom\n(\n select id, degerAdi, deger,\n @num := if(@degerAdi = `degerAdi`, @num + 1, 1) as rn,\n @degerAdi := `degerAdi` as dummy\n from table1\n) x\ngroup by rn;\n
\n\nIf you have an unknown number of columns then you will want to use prepared statements:
\nSET @sql = NULL;\nSELECT\n GROUP_CONCAT(DISTINCT\n CONCAT(\n 'max(case when degerAdi = ''',\n degerAdi,\n ''' then deger end) AS ',\n degerAdi\n )\n ) INTO @sql\nFROM Table1;\n\nSET @sql \n = CONCAT('SELECT rn, ', @sql, ' \n from\n (\n select id, degerAdi, deger,\n @num := if(@degerAdi = `degerAdi`, @num + 1, 1) as rn,\n @degerAdi := `degerAdi` as dummy\n from table1\n ) x\n group by rn');\n\nPREPARE stmt FROM @sql;\nEXECUTE stmt;\nDEALLOCATE PREPARE stmt;\n
\n\n
soup wrap:
You are trying to PIVOT the data but MySQL does not have a PIVOT function. Also to make this easier, you will want to partition the data based on the degerAdi value to apply a rownumber. If you have a known number of columns, then you can use:
select rn,
max(case when DEGERADI = 'asd' then DEGER end) asd,
max(case when DEGERADI = 'rty' then DEGER end) rty,
max(case when DEGERADI = 'hhh' then DEGER end) hhh,
max(case when DEGERADI = 'hjh' then DEGER end) hjh,
max(case when DEGERADI = 'ffgu' then DEGER end) ffgu,
max(case when DEGERADI = 'qwe' then DEGER end) qwe
from
(
select id, degerAdi, deger,
@num := if(@degerAdi = `degerAdi`, @num + 1, 1) as rn,
@degerAdi := `degerAdi` as dummy
from table1
) x
group by rn;
If you have an unknown number of columns then you will want to use prepared statements:
SET @sql = NULL;
SELECT
GROUP_CONCAT(DISTINCT
CONCAT(
'max(case when degerAdi = ''',
degerAdi,
''' then deger end) AS ',
degerAdi
)
) INTO @sql
FROM Table1;
SET @sql
= CONCAT('SELECT rn, ', @sql, '
from
(
select id, degerAdi, deger,
@num := if(@degerAdi = `degerAdi`, @num + 1, 1) as rn,
@degerAdi := `degerAdi` as dummy
from table1
) x
group by rn');
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
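The prepared-statement approach can also be prototyped from a host language. Here is a hedged Python + sqlite3 sketch of the same build-the-column-list-then-execute idea, using ROW_NUMBER() (SQLite 3.25+) in place of the MySQL user variables; the data is invented, and the column names follow the answer:

```python
import sqlite3

# Sketch with invented data; degerAdi/deger follow the answer's names.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (id INTEGER, degerAdi TEXT, deger TEXT)")
conn.executemany("INSERT INTO table1 VALUES (?, ?, ?)", [
    (1, "asd", "a1"), (2, "rty", "r1"), (3, "asd", "a2"), (4, "rty", "r2"),
])

# Step 1: build one max(case ...) column per distinct degerAdi value,
# mirroring the GROUP_CONCAT step. Values are interpolated into the SQL,
# so this is only safe for trusted data, exactly as with the MySQL version.
names = [r[0] for r in conn.execute(
    "SELECT DISTINCT degerAdi FROM table1 ORDER BY degerAdi")]
cols = ", ".join(
    f"max(CASE WHEN degerAdi = '{n}' THEN deger END) AS {n}" for n in names
)

# Step 2: run the generated pivot; ROW_NUMBER() replaces @num/@degerAdi.
pivoted = conn.execute(f"""
    SELECT rn, {cols}
    FROM (SELECT degerAdi, deger,
                 ROW_NUMBER() OVER (PARTITION BY degerAdi ORDER BY id) AS rn
          FROM table1)
    GROUP BY rn
    ORDER BY rn
""").fetchall()
print(pivoted)  # [(1, 'a1', 'r1'), (2, 'a2', 'r2')]
```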
qid & accept id:
(12773500, 12783577)
query:
SQLite count ocurrences in row
soup:
As mentioned the SQL appears fine. I ran a quick test here with the following:
\ncreate table #temp\n(num int)\n\ninsert #temp\nselect 1 union all\nselect 1 union all\nselect 1 union all\nselect 2 union all\nselect 3 \n\nselect Num, COUNT(num) as Occurances from #temp group by num\n\ndrop table #temp\n
\nThis gives the below result set:
\nNum Occurances\n1 3\n2 1\n3 1\n
\nCompare the above to your whole code, including the table creation etc.
\n
soup wrap:
As mentioned, the SQL appears fine. I ran a quick test here with the following:
create table #temp
(num int)
insert #temp
select 1 union all
select 1 union all
select 1 union all
select 2 union all
select 3
select Num, COUNT(num) as Occurances from #temp group by num
drop table #temp
This gives the below result set:
Num Occurances
1 3
2 1
3 1
Compare the above to your whole code, including the table creation etc.
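Since the question is about SQLite, the same quick test can be run there directly (the #temp syntax above is T-SQL; SQLite uses CREATE TEMP TABLE, but the GROUP BY behaves the same):

```python
import sqlite3

# Same five rows as the quick test above, run against SQLite.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TEMP TABLE t (num INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (1,), (1,), (2,), (3,)])

result = conn.execute(
    "SELECT num, COUNT(num) AS occurrences FROM t GROUP BY num ORDER BY num"
).fetchall()
print(result)  # [(1, 3), (2, 1), (3, 1)]
```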
qid & accept id:
(12783579, 12785937)
query:
Read file with multiple empty lines from ORACLE DB with BASH
soup:
Assuming the data is loaded into the CLOB with the line breaks as CHR(13)||CHR(10), and you can see it in the expected format if you just select directly from the table, then the problem is with how SQL*Plus is interacting with DBMS_OUTPUT.
\nBy default, SET SERVEROUTPUT ON sets the FORMAT to WORD_WRAPPED. The documentation says 'SQL*Plus left justifies each line, skipping all leading whitespace', but doesn't note that this also skips all blank lines.
\nIf you set SERVEROUTPUT ON FORMAT WRAPPED or ... TRUNCATED then your blank lines will reappear. But you need to make sure your linesize is wide enough for the longest possible line you want to print, particularly if you go with TRUNCATED.
\n(Also, your code is not declaring l_pos NUMBER := 1, and is missing a final DBMS_OUTPUT.NEW_LINE so you'll lose the final line from the CLOB).
\n
\nTo demonstrate, if I create a dummy table with just a CLOB column, and populate it with a value that has the carriage return/linefeed you're looking for:
\ncreate table t42(text clob);\n\ninsert into t42 values ('Hello Mr. X' || CHR(13) || CHR(10)\n || CHR(13) || CHR(10)\n || 'Text from Mailboddy' || CHR(13) || CHR(10)\n || CHR(13) || CHR(10)\n || 'Greetins' || CHR(13) || CHR(10)\n || 'Mr. Y');\n\nselect * from t42;\n
\nI get:
\nTEXT\n--------------------------------------------------------------------------------\nHello Mr. X\n\nText from Mailboddy\n\nGreetins\nMr. Y\n
\nUsing your procedure (very slightly modified so it will run):
\nsqlplus -s $DBLOGIN < file\nSET FEEDBACK OFF;\nSET SERVEROUTPUT ON FORMAT WORD_WRAPPED; -- setting this explicitly for effect\nDECLARE\n l_text CLOB;\n l_pos number := 1; -- added this\nBEGIN\n SELECT text\n INTO l_text\n FROM t42;\n while dbms_lob.substr(l_text, 1, l_pos) is not null LOOP\n if dbms_lob.substr(l_text, 2, l_pos) = CHR(13) || CHR(10) then\n DBMS_OUTPUT.NEW_LINE;\n l_pos:=l_pos + 1;\n else\n DBMS_OUTPUT.put(dbms_lob.substr(l_text, 1, l_pos));\n end if;\n l_pos:=l_pos + 1;\n END LOOP;\n dbms_output.new_line; -- added this\nEND;\n/\n\nENDE_SQL\n
\nfile contains:
\nHello Mr. X\nText from Mailboddy\nGreetins\nMr. Y\n
\nIf I only change one line in your code, to:
\nSET SERVEROUTPUT ON FORMAT WRAPPED;\n
\nthen file now contains:
\nHello Mr. X\n\nText from Mailboddy\n\nGreetins\nMr. Y\n
\n
\nYou might want to consider UTL_FILE for this, rather than DBMS_OUTPUT, depending on your configuration. Something like this might give you some pointers.
\n
soup wrap:
Assuming the data is loaded into the CLOB with the line breaks as CHR(13)||CHR(10), and you can see it in the expected format if you just select directly from the table, then the problem is with how SQL*Plus is interacting with DBMS_OUTPUT.
By default, SET SERVEROUTPUT ON sets the FORMAT to WORD_WRAPPED. The documentation says 'SQL*Plus left justifies each line, skipping all leading whitespace', but doesn't note that this also skips all blank lines.
If you set SERVEROUTPUT ON FORMAT WRAPPED or ... TRUNCATED then your blank lines will reappear. But you need to make sure your linesize is wide enough for the longest possible line you want to print, particularly if you go with TRUNCATED.
(Also, your code is not declaring l_pos NUMBER := 1, and is missing a final DBMS_OUTPUT.NEW_LINE so you'll lose the final line from the CLOB).
To demonstrate, if I create a dummy table with just a CLOB column, and populate it with a value that has the carriage return/linefeed you're looking for:
create table t42(text clob);
insert into t42 values ('Hello Mr. X' || CHR(13) || CHR(10)
|| CHR(13) || CHR(10)
|| 'Text from Mailboddy' || CHR(13) || CHR(10)
|| CHR(13) || CHR(10)
|| 'Greetins' || CHR(13) || CHR(10)
|| 'Mr. Y');
select * from t42;
I get:
TEXT
--------------------------------------------------------------------------------
Hello Mr. X
Text from Mailboddy
Greetins
Mr. Y
Using your procedure (very slightly modified so it will run):
sqlplus -s $DBLOGIN < file
SET FEEDBACK OFF;
SET SERVEROUTPUT ON FORMAT WORD_WRAPPED; -- setting this explicitly for effect
DECLARE
l_text CLOB;
l_pos number := 1; -- added this
BEGIN
SELECT text
INTO l_text
FROM t42;
while dbms_lob.substr(l_text, 1, l_pos) is not null LOOP
if dbms_lob.substr(l_text, 2, l_pos) = CHR(13) || CHR(10) then
DBMS_OUTPUT.NEW_LINE;
l_pos:=l_pos + 1;
else
DBMS_OUTPUT.put(dbms_lob.substr(l_text, 1, l_pos));
end if;
l_pos:=l_pos + 1;
END LOOP;
dbms_output.new_line; -- added this
END;
/
ENDE_SQL
file contains:
Hello Mr. X
Text from Mailboddy
Greetins
Mr. Y
If I only change one line in your code, to:
SET SERVEROUTPUT ON FORMAT WRAPPED;
then file now contains:
Hello Mr. X
Text from Mailboddy
Greetins
Mr. Y
You might want to consider UTL_FILE for this, rather than DBMS_OUTPUT, depending on your configuration. Something like this might give you some pointers.
qid & accept id:
(12815194, 12815234)
query:
Selecting an additional empty row that does not exist
soup:
Try to use union all
\nSELECT null as PROFILETITLE, null as DOCID \nUNION ALL\nSELECT PROFILETITLE, DOCID \nFROM PROFILES\nWHERE COMPANYCODE=? \nORDER BY PROFILETITLE\n
\nbut if you want to add a header row, and DOCID is an int type, you have to use union all and cast as below
\nSELECT 'PROFILETITLE' as PROFILETITLE, 'DOCID' as DOCID \nUNION ALL\nSELECT PROFILETITLE, CAST ( DOCID AS varchar(30) )\nFROM PROFILES\nWHERE COMPANYCODE=? \nORDER BY PROFILETITLE\n
\n
soup wrap:
Try using UNION ALL:
SELECT null as PROFILETITLE, null as DOCID
UNION ALL
SELECT PROFILETITLE, DOCID
FROM PROFILES
WHERE COMPANYCODE=?
ORDER BY PROFILETITLE
but if you want to add a header row, and DOCID is an int type, you have to use UNION ALL and CAST as below
SELECT 'PROFILETITLE' as PROFILETITLE, 'DOCID' as DOCID
UNION ALL
SELECT PROFILETITLE, CAST ( DOCID AS varchar(30) )
FROM PROFILES
WHERE COMPANYCODE=?
ORDER BY PROFILETITLE
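For illustration, here is the NULL-row variant checked with Python's sqlite3 (PROFILES and its rows are invented). In SQLite, NULL sorts before any text, so the empty row comes out first:

```python
import sqlite3

# Invented PROFILES data for the sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE profiles (profiletitle TEXT, docid INTEGER)")
conn.executemany("INSERT INTO profiles VALUES (?, ?)",
                 [("Beta", 2), ("Alpha", 1)])

# Prepend an all-NULL row; NULL sorts first under ORDER BY in SQLite.
rows = conn.execute("""
    SELECT NULL AS profiletitle, NULL AS docid
    UNION ALL
    SELECT profiletitle, docid FROM profiles
    ORDER BY profiletitle
""").fetchall()
print(rows)  # [(None, None), ('Alpha', 1), ('Beta', 2)]
```

For the text-header variant, the CAST is what keeps the UNION ALL branches type-compatible; note, though, that a text header sorts among the data rows unless it is given its own sort key.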
qid & accept id:
(12818621, 12822738)
query:
Postgresql. Create array inside select query
soup:
Assuming your starting table is named plop
\nSELECT\n plop.id,\n CASE\n WHEN plop.type = 1 THEN (SELECT array_agg(plop.entry * plop.size * val.x) FROM (VALUES (0.5), (0.3), (0.2)) val (x))::int4[]\n WHEN plop.type = 2 THEN (SELECT array_agg(3 * plop.entry * x/x ) FROM generate_series(1, plop.size / 3) x)::int4[]\n ELSE ARRAY[plop.entry * plop.size]::int4[]\n END AS prize_pool\nFROM plop\n;\n
\nThat returns:
\n┌────┬──────────────────┐ \n│ id │ prize_pool │ \n├────┼──────────────────┤ \n│ 1 │ {100} │ \n│ 2 │ {200} │ \n│ 3 │ {150,90,60} │ \n│ 4 │ {90,90,90,90,90} │ \n└────┴──────────────────┘\n
\nBecause entry × size / (size / 3) = 3 × entry
\nNote the x/x is always equal to 1 and is needed to indicate to Postgres on which set it must aggregate the results as an array.
\nHope it helps.
\n
soup wrap:
Assuming your starting table is named plop
SELECT
plop.id,
CASE
WHEN plop.type = 1 THEN (SELECT array_agg(plop.entry * plop.size * val.x) FROM (VALUES (0.5), (0.3), (0.2)) val (x))::int4[]
WHEN plop.type = 2 THEN (SELECT array_agg(3 * plop.entry * x/x ) FROM generate_series(1, plop.size / 3) x)::int4[]
ELSE ARRAY[plop.entry * plop.size]::int4[]
END AS prize_pool
FROM plop
;
That returns:
┌────┬──────────────────┐
│ id │ prize_pool │
├────┼──────────────────┤
│ 1 │ {100} │
│ 2 │ {200} │
│ 3 │ {150,90,60} │
│ 4 │ {90,90,90,90,90} │
└────┴──────────────────┘
Because entry × size / (size / 3) = 3 × entry
Note that x/x is always equal to 1; it is needed to tell Postgres over which set it must aggregate the results into an array.
Hope it helps.
qid & accept id:
(12823575, 12824065)
query:
How do I find pairs that share the one property (column) through multiple tuples (rows)?
soup:
If you can accept CSV instead of tabulated results, you could simply group the table twice:
\nSELECT GROUP_CONCAT(User) FROM (\n SELECT User, GROUP_CONCAT(DISTINCT `Show` ORDER BY `Show` SEPARATOR 0x1e) AS s\n FROM Shows\n GROUP BY User\n) t GROUP BY s\n
\nOtherwise, you can join the above subquery to itself:
\nSELECT DISTINCT LEAST(t.User, u.User) AS User1,\n GREATEST(t.User, u.User) AS User2\nFROM (\n SELECT User, GROUP_CONCAT(DISTINCT `Show` ORDER BY `Show` SEPARATOR 0x1e) AS s\n FROM Shows\n GROUP BY User\n) t JOIN (\n SELECT User, GROUP_CONCAT(DISTINCT `Show` ORDER BY `Show` SEPARATOR 0x1e) AS s\n FROM Shows\n GROUP BY User\n) u USING (s)\nWHERE t.User <> u.User\n
\nSee them on sqlfiddle.
\nOf course, if duplicate (User, Show) pairs are guaranteed not to exist in the Shows table, you could improve performance by removing the DISTINCT keyword from the GROUP_CONCAT() aggregations.
\n
soup wrap:
If you can accept CSV instead of tabulated results, you could simply group the table twice:
SELECT GROUP_CONCAT(User) FROM (
SELECT User, GROUP_CONCAT(DISTINCT `Show` ORDER BY `Show` SEPARATOR 0x1e) AS s
FROM Shows
GROUP BY User
) t GROUP BY s
Otherwise, you can join the above subquery to itself:
SELECT DISTINCT LEAST(t.User, u.User) AS User1,
GREATEST(t.User, u.User) AS User2
FROM (
SELECT User, GROUP_CONCAT(DISTINCT `Show` ORDER BY `Show` SEPARATOR 0x1e) AS s
FROM Shows
GROUP BY User
) t JOIN (
SELECT User, GROUP_CONCAT(DISTINCT `Show` ORDER BY `Show` SEPARATOR 0x1e) AS s
FROM Shows
GROUP BY User
) u USING (s)
WHERE t.User <> u.User
See them on sqlfiddle.
Of course, if duplicate (User, Show) pairs are guaranteed not to exist in the Shows table, you could improve performance by removing the DISTINCT keyword from the GROUP_CONCAT() aggregations.
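The "group users by their full set of shows" idea can also be sketched outside MySQL. In the hedged Python + sqlite3 sketch below (invented data), the signature is built in Python rather than with GROUP_CONCAT, since older SQLite versions give no ordering guarantee inside the aggregate:

```python
import sqlite3
from collections import defaultdict

# Invented data: ann and bob watch the same set, inserted in different order.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shows (user TEXT, show TEXT)")
conn.executemany("INSERT INTO shows VALUES (?, ?)", [
    ("ann", "A"), ("ann", "B"),
    ("bob", "B"), ("bob", "A"),
    ("cid", "A"),
])

# Collect each user's set of shows.
by_user = defaultdict(set)
for user, show in conn.execute("SELECT user, show FROM shows"):
    by_user[user].add(show)

# Group users sharing an identical set of shows (the GROUP BY s step above).
groups = defaultdict(list)
for user, show_set in by_user.items():
    groups[frozenset(show_set)].append(user)

pairs = [sorted(users) for users in groups.values() if len(users) > 1]
print(pairs)  # [['ann', 'bob']]
```

The frozenset plays the role of the sorted GROUP_CONCAT string: an order-insensitive signature of each user's shows.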
qid & accept id:
(12839031, 12840959)
query:
Sybase convert float to string
soup:
I'm not sure if there is easier way to do that on sybase.
\nThis example works for me
\ndeclare @val float\ndeclare @val2 float\nselect @val = 17.666655942234 \nselect @val2 = 17.66\nselect substring(convert(varchar(30),@val), 1, patindex('%.%',convert(varchar(30),@val)))+reverse(convert(varchar(30),convert(int,reverse(substring(convert(varchar(30),@val), patindex('%.%',convert(varchar(30),@val))+1,6))))) as Val,\n substring(convert(varchar(30),@val2), 1, patindex('%.%',convert(varchar(30),@val2)))+reverse(convert(varchar(30),convert(int,reverse(substring(convert(varchar(30),@val2), patindex('%.%',convert(varchar(30),@val2))+1,6))))) as Val2\n
\nsolution with varchar(15)
\ndeclare @val numeric(10,5)\ndeclare @val2 numeric(10,5)\nselect @val = convert(numeric(10,5),17.666655942234)\nselect @val2 = convert(numeric(10,5),17.66)\nselect convert(varchar(15),substring(convert(varchar(15),@val), 1, patindex('%.%',convert(varchar(15),@val)))+reverse(convert(varchar(15),convert(int,reverse(substring(convert(varchar(15),@val), patindex('%.%',convert(varchar(15),@val))+1,6)))))) as Val,\n convert(varchar(15),substring(convert(varchar(15),@val2), 1, patindex('%.%',convert(varchar(15),@val2)))+reverse(convert(varchar(15),convert(int,reverse(substring(convert(varchar(15),@val2), patindex('%.%',convert(varchar(15),@val2))+1,6)))))) as Val2\n
\n
soup wrap:
I'm not sure if there is an easier way to do that in Sybase.
This example works for me
declare @val float
declare @val2 float
select @val = 17.666655942234
select @val2 = 17.66
select substring(convert(varchar(30),@val), 1, patindex('%.%',convert(varchar(30),@val)))+reverse(convert(varchar(30),convert(int,reverse(substring(convert(varchar(30),@val), patindex('%.%',convert(varchar(30),@val))+1,6))))) as Val,
substring(convert(varchar(30),@val2), 1, patindex('%.%',convert(varchar(30),@val2)))+reverse(convert(varchar(30),convert(int,reverse(substring(convert(varchar(30),@val2), patindex('%.%',convert(varchar(30),@val2))+1,6))))) as Val2
A solution with varchar(15):
declare @val numeric(10,5)
declare @val2 numeric(10,5)
select @val = convert(numeric(10,5),17.666655942234)
select @val2 = convert(numeric(10,5),17.66)
select convert(varchar(15),substring(convert(varchar(15),@val), 1, patindex('%.%',convert(varchar(15),@val)))+reverse(convert(varchar(15),convert(int,reverse(substring(convert(varchar(15),@val), patindex('%.%',convert(varchar(15),@val))+1,6)))))) as Val,
convert(varchar(15),substring(convert(varchar(15),@val2), 1, patindex('%.%',convert(varchar(15),@val2)))+reverse(convert(varchar(15),convert(int,reverse(substring(convert(varchar(15),@val2), patindex('%.%',convert(varchar(15),@val2))+1,6)))))) as Val2
qid & accept id:
(12849213, 12849254)
query:
MySQL query to return total Profit/Loss for a list of dates
soup:
Assuming that Date is stored as you show on the expected result this should work:
\nSELECT\n SUM(Amount) AS "Profit/Loss",\n Date\nFROM your_table\nGROUP BY(Date)\n
\nOtherwise, if Date is of type DATE, DATETIME or TIMESTAMP, you could do something like this:
\nSELECT\n SUM(Amount) AS "Profit/Loss",\n DATE_FORMAT(Date, '%d-%m-%y') AS Date\nFROM your_table\nGROUP BY(DATE_FORMAT(Date, '%d-%m-%y'))\n
\nreferences:
\n\n- DATE_FORMAT
\n- GROUP BY
\n
\nEDIT (after OP's comment)
\nto achieve the cumulative SUM, here is a good hint:
\nSET @csum := 0;\nSELECT\n (@csum := @csum + x.ProfitLoss) as ProfitLoss,\n x.Date\nFROM\n(\n SELECT\n SUM(Amount) AS ProfitLoss,\n DATE_FORMAT(Date, '%d-%m-%y') AS Date\n FROM your_table\n GROUP BY(DATE_FORMAT(Date, '%d-%m-%y'))\n) x\norder by x.Date;\n
\nessentially, you store the current sum in a variable (@csum) and, for each row of the grouped transactions, increase it by the daily balance
\n
soup wrap:
Assuming that Date is stored as you show on the expected result this should work:
SELECT
SUM(Amount) AS "Profit/Loss",
Date
FROM your_table
GROUP BY(Date)
Otherwise, if Date is of type DATE, DATETIME or TIMESTAMP, you could do something like this:
SELECT
SUM(Amount) AS "Profit/Loss",
DATE_FORMAT(Date, '%d-%m-%y') AS Date
FROM your_table
GROUP BY(DATE_FORMAT(Date, '%d-%m-%y'))
references: DATE_FORMAT, GROUP BY
EDIT (after OP's comment)
to achieve the cumulative SUM, here is a good hint:
SET @csum := 0;
SELECT
(@csum := @csum + x.ProfitLoss) as ProfitLoss,
x.Date
FROM
(
SELECT
SUM(Amount) AS ProfitLoss,
DATE_FORMAT(Date, '%d-%m-%y') AS Date
FROM your_table
GROUP BY(DATE_FORMAT(Date, '%d-%m-%y'))
) x
order by x.Date;
essentially, you store the current sum in a variable (@csum) and, for each row of the grouped transactions, increase it by the daily balance
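The same running total can be written with a window function, which avoids the user-variable trick entirely (MySQL 8.0+ supports the identical syntax). A hedged sketch using Python's sqlite3 (SQLite 3.25+), with an invented table:

```python
import sqlite3

# Invented trades data: two rows on day one, one on day two.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (d TEXT, amount REAL)")
conn.executemany("INSERT INTO trades VALUES (?, ?)", [
    ("2012-10-01", 10.0), ("2012-10-01", -4.0), ("2012-10-02", 7.0),
])

# Inner query: daily profit/loss. Outer query: running total via SUM OVER.
rows = conn.execute("""
    SELECT d, SUM(daily) OVER (ORDER BY d) AS profit_loss
    FROM (SELECT d, SUM(amount) AS daily FROM trades GROUP BY d)
    ORDER BY d
""").fetchall()
print(rows)  # [('2012-10-01', 6.0), ('2012-10-02', 13.0)]
```

Unlike the @csum version, the window-function form does not depend on the (formally undefined) evaluation order of user variables.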
qid & accept id:
(12870094, 12870123)
query:
How can I group by on a field which has NULL values?
soup:
From Aggregate Functions in SQLite
\n\nThe count(X) function returns a count of the number of times that X is not NULL in a group. The count(*) function (with no arguments) returns the total number of rows in the group.
\n
\nSo, the COUNT function does not count NULL so use COUNT(*) instead of COUNT(y).
\nSELECT y, COUNT(*) AS COUNT\nFROM mytable\nGROUP BY y\n
\nOr you can also use COUNT(x) like this one.
\nSELECT y, COUNT(x) AS COUNT\nFROM mytable\nGROUP BY y\n
\nSee this SQLFiddle
\n
soup wrap:
From Aggregate Functions in SQLite
The count(X) function returns a count of the number of times that X is not NULL in a group. The count(*) function (with no arguments) returns the total number of rows in the group.
So the COUNT function does not count NULLs; use COUNT(*) instead of COUNT(y).
SELECT y, COUNT(*) AS COUNT
FROM mytable
GROUP BY y
Or you can use COUNT(x), like this:
SELECT y, COUNT(x) AS COUNT
FROM mytable
GROUP BY y
See this SQLFiddle
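A quick runnable check of the distinction in SQLite itself, with invented data (two rows in the NULL group):

```python
import sqlite3

# x is always non-NULL; y is NULL for two of the three rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (x TEXT, y TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?, ?)",
                 [("a", None), ("b", None), ("c", "g1")])

# COUNT(*) counts rows; COUNT(y) skips NULLs, so the NULL group shows 0.
rows = conn.execute(
    "SELECT y, COUNT(*), COUNT(y) FROM mytable GROUP BY y ORDER BY y"
).fetchall()
print(rows)  # [(None, 2, 0), ('g1', 1, 1)]
```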
qid & accept id:
(12875040, 12877084)
query:
Find similar objects that share the most tags
soup:
Given one object, you can find its tags like this:
\n SELECT t1.id\n FROM tags t1\n where t1.parent_id = ?\n
\nBuilding on that, you want to take that list of tags and find other parent_ids that share them.
\n SELECT parent_id, count(*)\n FROM tags t2\n WHERE EXISTS (\n SELECT t1.id\n FROM tags t1\n WHERE t1.parent_id = ?\n AND t1.id = t2.id\n )\n GROUP BY parent_id\n
\nThat will give you a count of how many tags those other parent_ids share.
\nYou can ORDER BY count(*) desc if you'd like to find the "most similar" rows first.
\nHope that helps.
\n
soup wrap:
Given one object, you can find its tags like this:
SELECT t1.id
FROM tags t1
where t1.parent_id = ?
Building on that, you want to take that list of tags and find other parent_ids that share them.
SELECT parent_id, count(*)
FROM tags t2
WHERE EXISTS (
SELECT t1.id
FROM tags t1
WHERE t1.parent_id = ?
AND t1.id = t2.id
)
GROUP BY parent_id
That will give you a count of how many tags those other parent_ids share.
You can ORDER BY count(*) desc if you'd like to find the "most similar" rows first.
Hope that helps.
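A hedged sqlite3 sketch of the same count, with invented tag data; one tweak over the query above is a `parent_id <> ?` filter so the object you start from is not returned as its own best match:

```python
import sqlite3

# Invented data: object 1 has tags 10,11,12; object 2 shares two of them.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tags (parent_id INTEGER, id INTEGER)")
conn.executemany("INSERT INTO tags VALUES (?, ?)", [
    (1, 10), (1, 11), (1, 12),
    (2, 10), (2, 11),
    (3, 12),
])

# For each other object, count how many of object 1's tags it shares.
rows = conn.execute("""
    SELECT t2.parent_id, COUNT(*) AS shared
    FROM tags t2
    WHERE t2.parent_id <> ?
      AND EXISTS (SELECT 1 FROM tags t1
                  WHERE t1.parent_id = ? AND t1.id = t2.id)
    GROUP BY t2.parent_id
    ORDER BY shared DESC
""", (1, 1)).fetchall()
print(rows)  # [(2, 2), (3, 1)]
```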
qid & accept id:
(12879550, 12879631)
query:
How to select row with max value when duplicate rows exist in SQL Server
soup:
You're basically just missing a status comparison since you want one row per status;
\nSELECT *\nFROM WF_Approval sr1\nWHERE NOT EXISTS (\n SELECT *\n FROM WF_Approval sr2 \n WHERE sr1.DocumentID = sr2.DocumentID AND \n sr1.Status = sr2.Status AND # <-- new line\n sr1.StepNumber < sr2.StepNumber\n) AND MasterStepID = 'Approval1'\n
\nor rewritten as a JOIN;
\nSELECT *\nFROM WF_Approval sr1\nLEFT JOIN WF_Approval sr2\n ON sr1.DocumentID = sr2.DocumentID \n AND sr1.Status = sr2.Status\n AND sr1.StepNumber < sr2.StepNumber\nWHERE sr2.DocumentID IS NULL\n AND sr1.MasterStepID = 'Approval1';\n
\nSQLfiddle with both versions of the query here.
\n
soup wrap:
You're basically just missing a status comparison, since you want one row per status:
SELECT *
FROM WF_Approval sr1
WHERE NOT EXISTS (
SELECT *
FROM WF_Approval sr2
WHERE sr1.DocumentID = sr2.DocumentID AND
sr1.Status = sr2.Status AND # <-- new line
sr1.StepNumber < sr2.StepNumber
) AND MasterStepID = 'Approval1'
Or rewritten as a JOIN:
SELECT *
FROM WF_Approval sr1
LEFT JOIN WF_Approval sr2
ON sr1.DocumentID = sr2.DocumentID
AND sr1.Status = sr2.Status
AND sr1.StepNumber < sr2.StepNumber
WHERE sr2.DocumentID IS NULL
AND sr1.MasterStepID = 'Approval1';
SQLfiddle with both versions of the query here.
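A hedged sqlite3 sketch of the NOT EXISTS version with an invented approval table: per document and status, only the row with the highest step number survives:

```python
import sqlite3

# Invented data: document 1 has two 'approved' steps; step 2 should win.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE wf_approval
                (documentid INTEGER, status TEXT, stepnumber INTEGER)""")
conn.executemany("INSERT INTO wf_approval VALUES (?, ?, ?)", [
    (1, "approved", 1), (1, "approved", 2),
    (1, "rejected", 1),
    (2, "approved", 5),
])

# Keep a row only if no later step exists for the same document and status.
rows = conn.execute("""
    SELECT documentid, status, stepnumber
    FROM wf_approval sr1
    WHERE NOT EXISTS (
        SELECT 1 FROM wf_approval sr2
        WHERE sr1.documentid = sr2.documentid
          AND sr1.status = sr2.status
          AND sr1.stepnumber < sr2.stepnumber)
    ORDER BY documentid, status
""").fetchall()
print(rows)  # [(1, 'approved', 2), (1, 'rejected', 1), (2, 'approved', 5)]
```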
qid & accept id:
(12899727, 12899749)
query:
SQL - Check if all the columns in one table also exist in another
soup:
select X\nfrom A\nLEFT OUTER JOIN B on A.x = B.X\nWHERE B.X IS NULL\n
\nto get all records from table A that are not in table B. Or
\nselect X\nfrom B\nLEFT OUTER JOIN A on A.x = B.X\nWHERE A.X IS NULL\n
\nto get all records from table B that are not in table A.
\n
soup wrap:
select X
from A
LEFT OUTER JOIN B on A.x = B.X
WHERE B.X IS NULL
to get all records from table A that are not in table B. Or
select X
from B
LEFT OUTER JOIN A on A.x = B.X
WHERE A.X IS NULL
to get all records from table B that are not in table A.
qid & accept id:
(12951673, 12952233)
query:
Oracle Cast using %TYPE attribute
soup:
%TYPE is only available in PL/SQL, and can only be used in the declaration section of a block. So, you can't do what you're attempting.
\nYou might think you could declare your own PL/SQL (sub)type and use that in the statement:
\ndeclare\n subtype my_type is t1.v%type;\nbegin\n insert into t1 select cast(v as my_type) from t2;\nend;\n/\n
\n... but that also won't work, because cast() is an SQL function not a PL/SQL one, and only recognises built-in and schema-level collection types; and you can't create an SQL type using the %TYPE either.
\n
\nAs a nasty hack, you could do something like:
\ninsert into t1 select substr(v, 1,\n select data_length\n from user_tab_columns\n where table_name = 'T1'\n and column_name = 'V') from t2;\n
\nWhich would be slightly more palatable if you could have that length stored in a variable - a substitution or bind variable in SQL*Plus, or a local variable in PL/SQL. For example, if it's a straight SQL update through SQL*Plus you could use a bind variable:
\nvar t1_v_len number;\nbegin\n select data_length into :t1_v_len\n from user_tab_columns\n where table_name = 'T1' and column_name = 'V';\nend;\n/\ninsert into t1 select substr(v, 1, :t1_v_len) from t2;\n
\nSomething similar could be done in other set-ups, it depends where the insert is being performed.
\n
soup wrap:
%TYPE is only available in PL/SQL, and can only be used in the declaration section of a block. So, you can't do what you're attempting.
You might think you could declare your own PL/SQL (sub)type and use that in the statement:
declare
subtype my_type is t1.v%type;
begin
insert into t1 select cast(v as my_type) from t2;
end;
/
... but that also won't work, because cast() is an SQL function not a PL/SQL one, and only recognises built-in and schema-level collection types; and you can't create an SQL type using the %TYPE either.
As a nasty hack, you could do something like:
insert into t1 select substr(v, 1,
select data_length
from user_tab_columns
where table_name = 'T1'
and column_name = 'V') from t2;
Which would be slightly more palatable if you could have that length stored in a variable - a substitution or bind variable in SQL*Plus, or a local variable in PL/SQL. For example, if it's a straight SQL update through SQL*Plus you could use a bind variable:
var t1_v_len number;
begin
select data_length into :t1_v_len
from user_tab_columns
where table_name = 'T1' and column_name = 'V';
end;
/
insert into t1 select substr(v, 1, :t1_v_len) from t2;
Something similar could be done in other set-ups, it depends where the insert is being performed.
qid & accept id:
(12989520, 12989554)
query:
Update text of column
soup:
Try this one,
\nupdate tab \nset mytext = concat('text none, ', Replace(mytext, 'text none',''));\n
\nSQLFiddle Demo
\nor simply do replace if you don't have any special reason to use concat
\nupdate tab \nset mytext = Replace(mytext, 'text none','text none, ');\n
\n
soup wrap:
Try this one,
update tab
set mytext = concat('text none, ', Replace(mytext, 'text none',''));
SQLFiddle Demo
or simply do replace if you don't have any special reason to use concat
update tab
set mytext = Replace(mytext, 'text none','text none, ');
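A quick runnable check of the REPLACE-based update using sqlite3 (table and value invented; SQLite's REPLACE works the same way here):

```python
import sqlite3

# One invented row containing the target text.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab (mytext TEXT)")
conn.execute("INSERT INTO tab VALUES ('text none')")

conn.execute(
    "UPDATE tab SET mytext = REPLACE(mytext, 'text none', 'text none, ')"
)
updated = conn.execute("SELECT mytext FROM tab").fetchone()[0]
print(repr(updated))  # 'text none, '
```

Note that REPLACE rewrites every occurrence in the column value, so re-running the update appends again; the concat-based variant has the same property.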
qid & accept id:
(13003656, 13003667)
query:
SQL GROUP BY and a condition on COUNT
soup:
Use a HAVING clause to filter an aggregated column.
\nSELECT id, count(oID) \nFROM MyTable \nGROUP BY oID \nHAVING count(oID) = 1\n
\nUPDATE 1
\nwrap the results in a subquery
\nSELECT a.*\nFROM tableName a INNER JOIN\n (\n SELECT id \n FROM MyTable \n GROUP BY id \n HAVING count(oID) = 1\n ) b ON a.ID = b.ID\n
\n
soup wrap:
Use a HAVING clause to filter an aggregated column.
SELECT id, count(oID)
FROM MyTable
GROUP BY id
HAVING count(oID) = 1
UPDATE 1
Wrap the results in a subquery:
SELECT a.*
FROM tableName a INNER JOIN
(
SELECT id
FROM MyTable
GROUP BY id
HAVING count(oID) = 1
) b ON a.ID = b.ID
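A hedged sqlite3 sketch of the HAVING filter plus the join back to the full rows (table and data invented):

```python
import sqlite3

# Invented data: id 1 appears once, id 2 appears twice.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER, oid INTEGER, note TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?, ?, ?)", [
    (1, 100, "only once"),
    (2, 200, "first"), (2, 201, "second"),
])

# The subquery keeps ids with exactly one row; the join recovers full rows.
rows = conn.execute("""
    SELECT a.id, a.oid, a.note
    FROM mytable a
    JOIN (SELECT id FROM mytable GROUP BY id HAVING COUNT(oid) = 1) b
      ON a.id = b.id
""").fetchall()
print(rows)  # [(1, 100, 'only once')]
```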
qid & accept id:
(13024512, 13024731)
query:
How to return requested results?
soup:
If you want to show all regions, and within each region to count the number with populations greater than 10 million, then probably this is easiest:
\nSELECT region, SUM(CASE WHEN population > 10000000 THEN 1 ELSE 0 END) as BigCountries\nFROM bbc\nGROUP BY region\n
\nSo if you have a region where no countries have a population greater than 10000000, you'll still have a row with that region name and a 0.
\n
\nFrom your comments to @Yograj Gupta question - if you want regions where all countries have populations > 10000000, then you can either modify the above:
\nSELECT region, COUNT(*) as Cnt,SUM(CASE WHEN population > 10000000 THEN 1 ELSE 0 END) as BigCountries\nFROM bbc\nGROUP BY region\nHAVING COUNT(*) = SUM(CASE WHEN population > 10000000 THEN 1 ELSE 0 END)\n
\nOr just exploit a simpler property:
\nSELECT region, COUNT(*) as Cnt,MIN(population) as LowestPop\nFROM bbc\nGROUP BY region\nHAVING MIN(population) > 10000000\n
\nwhere the minimum population for any country in the region is > 10000000, then all countries must have a population > 10000000
\n
soup wrap:
If you want to show all regions, and within each region to count the number with populations greater than 10 million, then probably this is easiest:
SELECT region, SUM(CASE WHEN population > 10000000 THEN 1 ELSE 0 END) as BigCountries
FROM bbc
GROUP BY region
So if you have a region where no countries have a population greater than 10000000, you'll still have a row with that region name and a 0.
From your comments to @Yograj Gupta question - if you want regions where all countries have populations > 10000000, then you can either modify the above:
SELECT region, COUNT(*) as Cnt,SUM(CASE WHEN population > 10000000 THEN 1 ELSE 0 END) as BigCountries
FROM bbc
GROUP BY region
HAVING COUNT(*) = SUM(CASE WHEN population > 10000000 THEN 1 ELSE 0 END)
Or just exploit a simpler property:
SELECT region, COUNT(*) as Cnt,MIN(population) as LowestPop
FROM bbc
GROUP BY region
HAVING MIN(population) > 10000000
If the minimum population across the countries in a region is > 10000000, then every country in that region must have a population > 10000000.
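Both queries can be checked quickly with sqlite3 and a tiny invented bbc-style table:

```python
import sqlite3

# Invented data: region A has one big country of two; region B has two of two.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bbc (region TEXT, name TEXT, population INTEGER)")
conn.executemany("INSERT INTO bbc VALUES (?, ?, ?)", [
    ("A", "a1", 20000000), ("A", "a2", 5000000),
    ("B", "b1", 15000000), ("B", "b2", 11000000),
])

# Conditional aggregation: count big countries per region (zero-safe).
per_region = conn.execute("""
    SELECT region,
           SUM(CASE WHEN population > 10000000 THEN 1 ELSE 0 END) AS big
    FROM bbc GROUP BY region ORDER BY region
""").fetchall()
print(per_region)  # [('A', 1), ('B', 2)]

# MIN trick: regions where even the smallest country is big.
all_big = conn.execute("""
    SELECT region FROM bbc
    GROUP BY region HAVING MIN(population) > 10000000
""").fetchall()
print(all_big)  # [('B',)]
```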
qid & accept id:
(13054785, 13054905)
query:
How to update selective rows in a table in sql server?
soup:
Okay the query should look like this, to update items 1,2,3,4:
\n UPDATE Items\n SET bitIsTab = 1\n WHERE ReqID IN (1,2,3,4);\n
\nIt can however be done using Linq:
\nList selectedIds = { 1, 2, 3, 4 };\nvar itemsToBeUpdated = (from i in yourContext.Items \n where selectedIds.Contains(i.ReqID)\n select i);\nitemsToBeUpdated.ForEach(i=>i.bitIsTab = 1);\nyourContext.SubmitChanges();\n
\nOr you could use a VARCHAR in your stored procedure:
\nCREATE PROCEDURE sp_setTabItems\n @ids varchar(500) AS\n UPDATE Items\n SET bitIsTab = 1\n WHERE charindex(',' + ReqID + ',', ',' + @ids + ',') > 0;\n
\nAnd then use "1,2,3,4" as your stored procedure parameter.
\nTo execute the stored procedure:
\n EXEC sp_setTabItems '1,2,3,4'\n
\nCould also be done in a more reusable way, with the bitIsTab as a parameter:
\nCREATE PROCEDURE sp_setTabItems\n @isTab bit,\n @ids varchar(500) AS\n UPDATE Items\n SET bitIsTab = @isTab \n WHERE charindex(',' + ReqID + ',', ',' + @ids + ',') > 0;\n
\nAnd executed this way:
\nEXEC sp_setTabItems '1,2,3,4',1\n
\nI updated the stored procedure solution, since comparing a INT with a VARCHAR won't work with the EXEC.
\n
soup wrap:
Okay, the query should look like this to update items 1, 2, 3, 4:
UPDATE Items
SET bitIsTab = 1
WHERE ReqID IN (1,2,3,4);
It can, however, also be done using LINQ:
var selectedIds = new List<int> { 1, 2, 3, 4 };
var itemsToBeUpdated = (from i in yourContext.Items
                        where selectedIds.Contains(i.ReqID)
                        select i).ToList();
itemsToBeUpdated.ForEach(i => i.bitIsTab = true);
yourContext.SubmitChanges();
Or you could use a VARCHAR in your stored procedure:
CREATE PROCEDURE sp_setTabItems
@ids varchar(500) AS
UPDATE Items
SET bitIsTab = 1
WHERE charindex(',' + CAST(ReqID AS varchar(10)) + ',', ',' + @ids + ',') > 0;
And then use "1,2,3,4" as your stored procedure parameter.
To execute the stored procedure:
EXEC sp_setTabItems '1,2,3,4'
Could also be done in a more reusable way, with the bitIsTab as a parameter:
CREATE PROCEDURE sp_setTabItems
@isTab bit,
@ids varchar(500) AS
UPDATE Items
SET bitIsTab = @isTab
WHERE charindex(',' + CAST(ReqID AS varchar(10)) + ',', ',' + @ids + ',') > 0;
And executed this way:
EXEC sp_setTabItems '1,2,3,4',1
I updated the stored procedure solution, since comparing an INT with a VARCHAR won't work with the EXEC.
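The delimiter-padding charindex trick translates to other engines too; here is a sketch in Python using SQLite's instr (table and column names follow the answer, the data is made up). Note how padding both sides with commas stops ID 12 from matching when 1 or 2 is in the list:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Items (ReqID INTEGER, bitIsTab INTEGER DEFAULT 0);
    INSERT INTO Items (ReqID) VALUES (1), (2), (3), (4), (5), (12);
""")

ids = "1,2,3,4"
# instr(haystack, needle): wrap both the list and the id in commas so
# that ',12,' cannot accidentally match inside ',1,2,3,4,'.
conn.execute("""
    UPDATE Items SET bitIsTab = 1
    WHERE instr(',' || ? || ',', ',' || CAST(ReqID AS TEXT) || ',') > 0
""", (ids,))

tabbed = [r[0] for r in conn.execute(
    "SELECT ReqID FROM Items WHERE bitIsTab = 1 ORDER BY ReqID")]
```

Only rows 1 through 4 are flagged; 5 and 12 are left alone.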
qid & accept id:
(13055295, 13055403)
query:
Increment value in SQL SELECT statement
soup:
You can try this
\nselect\n td.DocID, td.FullName, td.DocContRole,\n row_number() over (partition by td.DocID, td.DocContRole order by td.FullName) as NumRole\nfrom dbo.#TempDoc_DocContRoles as td\n
\nSo dynamic SQL will be smth like that
\n\ncreate table #t2\n(\n DocID int, FullName nvarchar(max), \n NumRole nvarchar(max)\n)\n\ndeclare @pivot_columns nvarchar(max), @stmt nvarchar(max)\n\ninsert into #t2\nselect\n td.DocID, td.FullName,\n td.DocContRole + \n cast(\n row_number() over \n (partition by td.DocID, td.DocContRole order by td.FullName)\n as nvarchar(max)) as NumRole\nfrom t as td\n\nselect\n @pivot_columns = \n isnull(@pivot_columns + ', ', '') + \n '[' + NumRole + ']'\nfrom (select distinct NumRole from #t2) as T\n\nselect @stmt = '\nselect *\nfrom #t2 as t\npivot\n(\nmin(FullName)\nfor NumRole in (' + @pivot_columns + ')\n) as PT'\n\nexec sp_executesql\n @stmt = @stmt\n
\n
soup wrap:
You can try this
select
td.DocID, td.FullName, td.DocContRole,
row_number() over (partition by td.DocID, td.DocContRole order by td.FullName) as NumRole
from dbo.#TempDoc_DocContRoles as td
So the dynamic SQL will be something like this:
create table #t2
(
DocID int, FullName nvarchar(max),
NumRole nvarchar(max)
)
declare @pivot_columns nvarchar(max), @stmt nvarchar(max)
insert into #t2
select
td.DocID, td.FullName,
td.DocContRole +
cast(
row_number() over
(partition by td.DocID, td.DocContRole order by td.FullName)
as nvarchar(max)) as NumRole
from t as td
select
@pivot_columns =
isnull(@pivot_columns + ', ', '') +
'[' + NumRole + ']'
from (select distinct NumRole from #t2) as T
select @stmt = '
select *
from #t2 as t
pivot
(
min(FullName)
for NumRole in (' + @pivot_columns + ')
) as PT'
exec sp_executesql
@stmt = @stmt
qid & accept id:
(13068001, 13068383)
query:
update each row with different values in temp table
soup:
SQL Server Solution
\nThis query will sequentially take the values from the temp table and update the code in the example table in round robin fashion, repeating the values from temp when required.
\nupdate e\nset code = t.code\nfrom example e\njoin temp t on t.id = (e.id -1) % (select count(*) from temp) + 1\n
\nIf the ids are not sequential in either table, then you can row_number() them first, e.g.
\nupdate e\nset code = t.code\nfrom (select *,rn=row_number() over (order by id) from example) e\njoin (select *,rn=row_number() over (order by id) from temp) t\n on t.rn = (e.rn -1) % (select count(*) from temp) + 1\n
\nThe same technique (mod, row-number) can be used in other RDBMS, but the syntax will differ a little.
\n
soup wrap:
SQL Server Solution
This query will sequentially take the values from the temp table and update the code in the example table in round-robin fashion, repeating the values from temp when required.
update e
set code = t.code
from example e
join temp t on t.id = (e.id -1) % (select count(*) from temp) + 1
If the ids are not sequential in either table, then you can row_number() them first, e.g.
update e
set code = t.code
from (select *,rn=row_number() over (order by id) from example) e
join (select *,rn=row_number() over (order by id) from temp) t
on t.rn = (e.rn -1) % (select count(*) from temp) + 1
The same technique (mod, row-number) can be used in other RDBMS, but the syntax will differ a little.
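A sketch of the round-robin modulo join in Python/SQLite (the data is made up; the temp table is renamed temp_codes here, and a correlated subquery stands in for the UPDATE ... FROM join, which SQLite historically lacked):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE example (id INTEGER, code TEXT);
    CREATE TABLE temp_codes (id INTEGER, code TEXT);
    INSERT INTO example VALUES (1, NULL), (2, NULL), (3, NULL), (4, NULL), (5, NULL);
    INSERT INTO temp_codes VALUES (1, 'X'), (2, 'Y');
""")

# (e.id - 1) % n + 1 maps example ids 1..5 onto temp ids 1,2,1,2,1.
conn.execute("""
    UPDATE example
    SET code = (SELECT t.code FROM temp_codes t
                WHERE t.id = (example.id - 1) % (SELECT COUNT(*) FROM temp_codes) + 1)
""")

codes = [r[0] for r in conn.execute("SELECT code FROM example ORDER BY id")]
```

The two codes repeat in round-robin order across the five rows.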
qid & accept id:
(13069202, 13069340)
query:
Regexp_like with placeholders perl
soup:
I think you should be able to do:
\nselect id_name from name_table where regexp_like(name, ?);\n
\nIf only part of the regexp comes from the placeholder, use string concatenation:
\nselect id_name from name_table where regexp_like(name, ? || '[a-z]$');\n
\n
soup wrap:
I think you should be able to do:
select id_name from name_table where regexp_like(name, ?);
If only part of the regexp comes from the placeholder, use string concatenation:
select id_name from name_table where regexp_like(name, ? || '[a-z]$');
qid & accept id:
(13080106, 13080121)
query:
How to combine these queries that group by the same field?
soup:
If you have only three possible values of cache, you can use this,
\nSELECT DATE(datetime) as datetime,\n SUM(CASE WHEN cached = 'a' THEN 1 ELSE 0 END) cached_a,\n SUM(CASE WHEN cached = 'b' THEN 1 ELSE 0 END) cached_b,\n SUM(CASE WHEN cached = 'c' THEN 1 ELSE 0 END) cached_c\nFROM requests\nGROUP BY DAY(datetime)\n
\notherwise, if you have multiple number of cache, you can use Prepared Statement
\nSET @sql = NULL;\nSELECT\n GROUP_CONCAT(DISTINCT\n CONCAT(\n 'SUM(CASE WHEN cached = ''',\n cached,\n ''' then 1 ELSE 0 end) AS ',\n CONCAT('cached_',cached)\n )\n ) INTO @sql\nFROM requests;\n\nSET @sql = CONCAT('SELECT DATE(datetime) as datetime, ', @sql, ' \n FROM requests \n GROUP BY DAY(datetime)');\n\nPREPARE stmt FROM @sql;\nEXECUTE stmt;\nDEALLOCATE PREPARE stmt;\n
\n
soup wrap:
If you have only three possible values of cached, you can use this:
SELECT DATE(datetime) as datetime,
SUM(CASE WHEN cached = 'a' THEN 1 ELSE 0 END) cached_a,
SUM(CASE WHEN cached = 'b' THEN 1 ELSE 0 END) cached_b,
SUM(CASE WHEN cached = 'c' THEN 1 ELSE 0 END) cached_c
FROM requests
GROUP BY DAY(datetime)
Otherwise, if the number of cached values can vary, you can use a prepared statement:
SET @sql = NULL;
SELECT
GROUP_CONCAT(DISTINCT
CONCAT(
'SUM(CASE WHEN cached = ''',
cached,
''' then 1 ELSE 0 end) AS ',
CONCAT('cached_',cached)
)
) INTO @sql
FROM requests;
SET @sql = CONCAT('SELECT DATE(datetime) as datetime, ', @sql, '
FROM requests
GROUP BY DAY(datetime)');
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
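The prepared-statement trick above builds the SUM(CASE ...) column list from the data itself; the same idea can be sketched in Python against SQLite (names follow the answer, rows are made up; the values are interpolated into the SQL string, so this is only safe for trusted data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE requests (dt TEXT, cached TEXT);
    INSERT INTO requests VALUES
        ('2012-10-01', 'a'), ('2012-10-01', 'a'), ('2012-10-01', 'b'),
        ('2012-10-02', 'c');
""")

# Discover the distinct values, then build one SUM(CASE...) per value,
# mirroring what GROUP_CONCAT(DISTINCT ...) does in the MySQL version.
values = [r[0] for r in conn.execute(
    "SELECT DISTINCT cached FROM requests ORDER BY cached")]
cols = ", ".join(
    f"SUM(CASE WHEN cached = '{v}' THEN 1 ELSE 0 END) AS cached_{v}"
    for v in values
)
sql = f"SELECT dt, {cols} FROM requests GROUP BY dt ORDER BY dt"
rows = conn.execute(sql).fetchall()
```

Each distinct cached value becomes its own count column, one row per day.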
qid & accept id:
(13096793, 13096833)
query:
SQL query to get total amount from 2 table and sort by date
soup:
SELECT COALESCE(o.date, p.date) date, Sales, Purchases\n FROM (SELECT date, SUM(amount) Sales FROM CustomerOrder GROUP BY date) o\n FULL JOIN (SELECT date, SUM(amount) Purchases FROM PurchaseOrder GROUP BY date) p\n ON o.date = p.date\n ORDER BY date\n
\nMySQL doesn't support FULL JOIN, so specifically for MySQL, you can use
\n SELECT o.date, Sales, Purchases\n FROM (SELECT date, SUM(amount) Sales FROM CustomerOrder GROUP BY date) o\n LEFT JOIN (SELECT date, SUM(amount) Purchases FROM PurchaseOrder GROUP BY date) p\n ON o.date = p.date\n UNION ALL\n SELECT date, NULL, SUM(amount) Purchases\n FROM PurchaseOrder p2\n WHERE NOT EXISTS (SELECT *\n FROM CustomerOrder o2\n WHERE o2.date = p2.date)\n GROUP BY date\n ORDER BY date\n
\n
soup wrap:
SELECT COALESCE(o.date, p.date) date, Sales, Purchases
FROM (SELECT date, SUM(amount) Sales FROM CustomerOrder GROUP BY date) o
FULL JOIN (SELECT date, SUM(amount) Purchases FROM PurchaseOrder GROUP BY date) p
ON o.date = p.date
ORDER BY date
MySQL doesn't support FULL JOIN, so specifically for MySQL, you can use
SELECT o.date, Sales, Purchases
FROM (SELECT date, SUM(amount) Sales FROM CustomerOrder GROUP BY date) o
LEFT JOIN (SELECT date, SUM(amount) Purchases FROM PurchaseOrder GROUP BY date) p
ON o.date = p.date
UNION ALL
SELECT date, NULL, SUM(amount) Purchases
FROM PurchaseOrder p2
WHERE NOT EXISTS (SELECT *
FROM CustomerOrder o2
WHERE o2.date = p2.date)
GROUP BY date
ORDER BY date
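The FULL JOIN emulation can be exercised on tiny sample tables; a sketch in Python/SQLite (the date column is renamed odate here for clarity; the data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE CustomerOrder (odate TEXT, amount INTEGER);
    CREATE TABLE PurchaseOrder (odate TEXT, amount INTEGER);
    INSERT INTO CustomerOrder VALUES ('2012-11-01', 100), ('2012-11-01', 50), ('2012-11-02', 30);
    INSERT INTO PurchaseOrder VALUES ('2012-11-02', 70), ('2012-11-03', 40);
""")

# LEFT JOIN covers every sales date; the UNION ALL arm adds the
# purchase-only dates, together emulating a FULL JOIN.
rows = conn.execute("""
    SELECT o.odate, Sales, Purchases
    FROM (SELECT odate, SUM(amount) Sales FROM CustomerOrder GROUP BY odate) o
    LEFT JOIN (SELECT odate, SUM(amount) Purchases FROM PurchaseOrder GROUP BY odate) p
      ON o.odate = p.odate
    UNION ALL
    SELECT odate, NULL, SUM(amount)
    FROM PurchaseOrder p2
    WHERE NOT EXISTS (SELECT * FROM CustomerOrder o2 WHERE o2.odate = p2.odate)
    GROUP BY odate
    ORDER BY odate
""").fetchall()
```

Dates present in only one table still appear, with NULL on the missing side.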
qid & accept id:
(13103114, 13103164)
query:
T:SQL: select values from rows as columns
soup:
It's easy to do this without PIVOT keyword, just by grouping
\nselect\n P.ProfileID,\n min(case when PD.PropertyName = 'FirstName' then P.PropertyValue else null end) as FirstName,\n min(case when PD.PropertyName = 'LastName' then P.PropertyValue else null end) as LastName,\n min(case when PD.PropertyName = 'Salary' then P.PropertyValue else null end) as Salary\nfrom Profiles as P\n left outer join PropertyDefinitions as PD on PD.PropertyDefinitionID = P.PropertyDefinitionID\ngroup by P.ProfileID\n
\nyou can also do this with PIVOT keyword
\nselect\n *\nfrom\n(\n select P.ProfileID, P.PropertyValue, PD.PropertyName\n from Profiles as P\n left outer join PropertyDefinitions as PD on PD.PropertyDefinitionID = P.PropertyDefinitionID\n) as P\n pivot\n (\n min(P.PropertyValue)\n for P.PropertyName in ([FirstName], [LastName], [Salary])\n ) as PIV\n
\nUPDATE: For dynamic number of properties - take a look at Increment value in SQL SELECT statement
\n
soup wrap:
It's easy to do this without the PIVOT keyword, just by grouping:
select
P.ProfileID,
min(case when PD.PropertyName = 'FirstName' then P.PropertyValue else null end) as FirstName,
min(case when PD.PropertyName = 'LastName' then P.PropertyValue else null end) as LastName,
min(case when PD.PropertyName = 'Salary' then P.PropertyValue else null end) as Salary
from Profiles as P
left outer join PropertyDefinitions as PD on PD.PropertyDefinitionID = P.PropertyDefinitionID
group by P.ProfileID
You can also do this with the PIVOT keyword:
select
*
from
(
select P.ProfileID, P.PropertyValue, PD.PropertyName
from Profiles as P
left outer join PropertyDefinitions as PD on PD.PropertyDefinitionID = P.PropertyDefinitionID
) as P
pivot
(
min(P.PropertyValue)
for P.PropertyName in ([FirstName], [LastName], [Salary])
) as PIV
UPDATE: For a dynamic number of properties, take a look at Increment value in SQL SELECT statement.
qid & accept id:
(13110356, 13120794)
query:
Best way to store huge log data
soup:
Partitioning in postgresql works great for big logs. First create the parent table:
\ncreate table game_history_log (\n gameid integer,\n views integer,\n plays integer,\n likes integer,\n log_date date\n);\n
\nNow create the partitions. In this case one for each month, 900 k rows, would be good:
\ncreate table game_history_log_201210 (\n check (log_date between '2012-10-01' and '2012-10-31')\n) inherits (game_history_log);\n\ncreate table game_history_log_201211 (\n check (log_date between '2012-11-01' and '2012-11-30')\n) inherits (game_history_log);\n
\nNotice the check constraints in each partition. If you try to insert in the wrong partition:
\ninsert into game_history_log_201210 (\n gameid, views, plays, likes, log_date\n) values (1, 2, 3, 4, '2012-09-30');\nERROR: new row for relation "game_history_log_201210" violates check constraint "game_history_log_201210_log_date_check"\nDETAIL: Failing row contains (1, 2, 3, 4, 2012-09-30).\n
\nOne of the advantages of partitioning is that it will only search in the correct partition reducing drastically and consistently the search size regardless of how many years of data there is. Here the explain for the search for a certain date:
\nexplain\nselect *\nfrom game_history_log\nwhere log_date = date '2012-10-02';\n QUERY PLAN \n------------------------------------------------------------------------------------------------------\n Result (cost=0.00..30.38 rows=9 width=20)\n -> Append (cost=0.00..30.38 rows=9 width=20)\n -> Seq Scan on game_history_log (cost=0.00..0.00 rows=1 width=20)\n Filter: (log_date = '2012-10-02'::date)\n -> Seq Scan on game_history_log_201210 game_history_log (cost=0.00..30.38 rows=8 width=20)\n Filter: (log_date = '2012-10-02'::date)\n
\nNotice that apart from the parent table it only scanned the correct partition. Obviously you can have indexes on the partitions to avoid a sequential scan.
\n\n
soup wrap:
Partitioning in PostgreSQL works great for big logs. First create the parent table:
create table game_history_log (
gameid integer,
views integer,
plays integer,
likes integer,
log_date date
);
Now create the partitions. In this case, one for each month, at about 900k rows each, would be good:
create table game_history_log_201210 (
check (log_date between '2012-10-01' and '2012-10-31')
) inherits (game_history_log);
create table game_history_log_201211 (
check (log_date between '2012-11-01' and '2012-11-30')
) inherits (game_history_log);
Notice the check constraints in each partition. If you try to insert in the wrong partition:
insert into game_history_log_201210 (
gameid, views, plays, likes, log_date
) values (1, 2, 3, 4, '2012-09-30');
ERROR: new row for relation "game_history_log_201210" violates check constraint "game_history_log_201210_log_date_check"
DETAIL: Failing row contains (1, 2, 3, 4, 2012-09-30).
One of the advantages of partitioning is that it will only search the correct partition, drastically and consistently reducing the search size regardless of how many years of data there are. Here is the EXPLAIN output for a search on a certain date:
explain
select *
from game_history_log
where log_date = date '2012-10-02';
QUERY PLAN
------------------------------------------------------------------------------------------------------
Result (cost=0.00..30.38 rows=9 width=20)
-> Append (cost=0.00..30.38 rows=9 width=20)
-> Seq Scan on game_history_log (cost=0.00..0.00 rows=1 width=20)
Filter: (log_date = '2012-10-02'::date)
-> Seq Scan on game_history_log_201210 game_history_log (cost=0.00..30.38 rows=8 width=20)
Filter: (log_date = '2012-10-02'::date)
Notice that apart from the parent table it only scanned the correct partition. Obviously you can have indexes on the partitions to avoid a sequential scan.
qid & accept id:
(13128635, 13128831)
query:
Using a left join and checking if the row existed along with another check in where clause
soup:
According to this answer, in SQL-Server using NOT EXISTS is more efficient than LEFT JOIN/IS NULL
\nSELECT *\nFROM Users u\nWHERE u.IsActive = 1\nAND u.Status <> 'disabled'\nAND NOT EXISTS (SELECT 1 FROM Banned b WHERE b.UserID = u.UserID)\n
\nEDIT
\nFor the sake of completeness this is how I would do it with a LEFT JOIN:
\nSELECT *\nFROM Users u\n LEFT JOIN Banned b\n ON b.UserID = u.UserID\nWHERE u.IsActive = 1\nAND u.Status <> 'disabled'\nAND b.UserID IS NULL -- EXCLUDE ROWS WITH A MATCH IN `BANNED`\n
\n
soup wrap:
According to this answer, in SQL Server using NOT EXISTS is more efficient than LEFT JOIN / IS NULL:
SELECT *
FROM Users u
WHERE u.IsActive = 1
AND u.Status <> 'disabled'
AND NOT EXISTS (SELECT 1 FROM Banned b WHERE b.UserID = u.UserID)
EDIT
For the sake of completeness, this is how I would do it with a LEFT JOIN:
SELECT *
FROM Users u
LEFT JOIN Banned b
ON b.UserID = u.UserID
WHERE u.IsActive = 1
AND u.Status <> 'disabled'
AND b.UserID IS NULL -- EXCLUDE ROWS WITH A MATCH IN `BANNED`
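A quick check that the two anti-join forms return the same rows, sketched in Python/SQLite (the Users/Banned schema is a guess based on the answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Users (UserID INTEGER, IsActive INTEGER, Status TEXT);
    CREATE TABLE Banned (UserID INTEGER);
    INSERT INTO Users VALUES (1, 1, 'ok'), (2, 1, 'disabled'), (3, 1, 'ok'), (4, 0, 'ok');
    INSERT INTO Banned VALUES (3);
""")

# NOT EXISTS form.
not_exists = conn.execute("""
    SELECT u.UserID FROM Users u
    WHERE u.IsActive = 1 AND u.Status <> 'disabled'
      AND NOT EXISTS (SELECT 1 FROM Banned b WHERE b.UserID = u.UserID)
    ORDER BY u.UserID
""").fetchall()

# LEFT JOIN / IS NULL form.
left_join = conn.execute("""
    SELECT u.UserID FROM Users u
    LEFT JOIN Banned b ON b.UserID = u.UserID
    WHERE u.IsActive = 1 AND u.Status <> 'disabled' AND b.UserID IS NULL
    ORDER BY u.UserID
""").fetchall()
```

Only user 1 qualifies under either form: 2 is disabled, 3 is banned, 4 is inactive.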
qid & accept id:
(13144230, 13144419)
query:
Divisioning of results of two select SQL-statements
soup:
You can create Views for things like this.
\ncreate view vResult1 as\nselect your(\n complicated(\n query(\n here()\n )\n )\n );\n\ncreate view vResult2 as\nselect another(\n complicated(\n query(\n here()\n )\n )\n );\n
\nThen you may run them:
\nselect vResult1/vResult2;\n
\nIf you need parameters for your complicated queries - you may use stored procedures.
\n
soup wrap:
You can create Views for things like this.
create view vResult1 as
select your(
complicated(
query(
here()
)
)
);
create view vResult2 as
select another(
complicated(
query(
here()
)
)
);
Then you may run them:
select vResult1/vResult2;
If you need parameters for your complicated queries, you can use stored procedures instead.
qid & accept id:
(13159227, 13162198)
query:
SQL Dynamic Columns
soup:
The basic syntax will be:
\nselect user,\n sum(case when wrapupcode = 'Service' then 1 else 0 end) Service,\n sum(case when wrapupcode = 'Sales' then 1 else 0 end) Sales,\n sum(case when wrapupcode = 'Meeting' then 1 else 0 end) Meeting,\n sum(case when wrapupcode = 'Other' then 1 else 0 end) Other,\n count(timediff) timediff\nfrom\n( \n \n) src\ngroup by user\n
\nHard-coded static version will be something similar to this:
\nselect user,\n sum(case when wrapupcode = 'Service' then 1 else 0 end) Service,\n sum(case when wrapupcode = 'Sales' then 1 else 0 end) Sales,\n sum(case when wrapupcode = 'Meeting' then 1 else 0 end) Meeting,\n sum(case when wrapupcode = 'Other' then 1 else 0 end) Other,\n count(timediff) timediff\nfrom\n( \n select u.loginid as user,\n b.name wrapupcode,\n time(age.`instime`) as initialtime,\n age.`ENDOFWRAPUPTIME` AS endofwrapup,\n count(timediff(age.`ENDOFWRAPUPTIME`, time(age.`instime`))) as timediff\n from agentcallinformation age\n left join `axpuser` u\n on age.userid = u.pkey\n left join `breakcode` b\n on age.wrapupcode = b.pkey\n and age.wrapupcode <> ''\n WHERE age.endofwrapuptime IS NOT null \n) src\ngroup by user\n
\nI changed the query to use JOIN syntax instead of the correlated subqueries.
\nIf you need a dynamic version, then you can use prepared statements:
\nSET @sql = NULL;\nSELECT\n GROUP_CONCAT(DISTINCT\n CONCAT(\n 'sum(case when wrapupcode = ''',\n name,\n ''' then 1 else 0 end) AS ',\n name\n )\n ) INTO @sql\nFROM breakcode;\n\nSET @sql = CONCAT('SELECT user, ', @sql, ' \n , count(timediff) timediff\n from\n ( \n select u.loginid as user,\n b.name wrapupcode,\n time(age.`instime`) as initialtime,\n age.`ENDOFWRAPUPTIME` AS endofwrapup,\n count(timediff(age.`ENDOFWRAPUPTIME`, time(age.`instime`))) as timediff\n from agentcallinformation age\n left join `axpuser` u\n on age.userid = u.pkey\n left join `breakcode` b\n on age.wrapupcode = b.pkey\n and age.wrapupcode <> ''\n WHERE age.endofwrapuptime IS NOT null \n ) src\n GROUP BY user');\n\nPREPARE stmt FROM @sql;\nEXECUTE stmt;\nDEALLOCATE PREPARE stmt;\n
\n
soup wrap:
The basic syntax will be:
select user,
sum(case when wrapupcode = 'Service' then 1 else 0 end) Service,
sum(case when wrapupcode = 'Sales' then 1 else 0 end) Sales,
sum(case when wrapupcode = 'Meeting' then 1 else 0 end) Meeting,
sum(case when wrapupcode = 'Other' then 1 else 0 end) Other,
count(timediff) timediff
from
(
) src
group by user
The hard-coded static version will look something like this:
select user,
sum(case when wrapupcode = 'Service' then 1 else 0 end) Service,
sum(case when wrapupcode = 'Sales' then 1 else 0 end) Sales,
sum(case when wrapupcode = 'Meeting' then 1 else 0 end) Meeting,
sum(case when wrapupcode = 'Other' then 1 else 0 end) Other,
count(timediff) timediff
from
(
select u.loginid as user,
b.name wrapupcode,
time(age.`instime`) as initialtime,
age.`ENDOFWRAPUPTIME` AS endofwrapup,
count(timediff(age.`ENDOFWRAPUPTIME`, time(age.`instime`))) as timediff
from agentcallinformation age
left join `axpuser` u
on age.userid = u.pkey
left join `breakcode` b
on age.wrapupcode = b.pkey
and age.wrapupcode <> ''
WHERE age.endofwrapuptime IS NOT null
) src
group by user
I changed the query to use JOIN syntax instead of the correlated subqueries.
If you need a dynamic version, then you can use prepared statements:
SET @sql = NULL;
SELECT
GROUP_CONCAT(DISTINCT
CONCAT(
'sum(case when wrapupcode = ''',
name,
''' then 1 else 0 end) AS ',
name
)
) INTO @sql
FROM breakcode;
SET @sql = CONCAT('SELECT user, ', @sql, '
, count(timediff) timediff
from
(
select u.loginid as user,
b.name wrapupcode,
time(age.`instime`) as initialtime,
age.`ENDOFWRAPUPTIME` AS endofwrapup,
count(timediff(age.`ENDOFWRAPUPTIME`, time(age.`instime`))) as timediff
from agentcallinformation age
left join `axpuser` u
on age.userid = u.pkey
left join `breakcode` b
on age.wrapupcode = b.pkey
and age.wrapupcode <> ''
WHERE age.endofwrapuptime IS NOT null
) src
GROUP BY user');
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
qid & accept id:
(13173833, 13173914)
query:
How to have many column from just one column?
soup:
This should do it:
\nSELECT ID,\n StudentID,\n Mon,\n MAX(CASE WHEN Type LIKE 'Obtained%' THEN Value END) AS Obtained,\n MAX(CASE WHEN Type LIKE 'Benefit%' THEN Value END) AS Benefit,\n MAX(CASE WHEN Type LIKE 'Max%' THEN Value END) AS `Max`,\n CASE WHEN RIGHT(Type, 2) = 'II' THEN 'II' ELSE 'I' END AS Type\nFROM T\nGROUP BY ID, StudentID, Mon, CASE WHEN RIGHT(Type, 2) = 'II' THEN 'II' ELSE 'I' END\nORDER BY ID, StudentID, Mon, Type\n
\n\nAlthough it would make more sense to store type separately. i.e. have one column for "obtained", "max" etc and another column for "I", "II"
\nEDIT
\nWith your revised data structure this should work:
\nSELECT ID,\n StudentID,\n Mon,\n COALESCE(MAX(CASE WHEN Type IN (1, 7) THEN Value END), 0) AS Obtained,\n COALESCE(MAX(CASE WHEN Type IN (2, 8) THEN Value END), 0) AS Benefit,\n COALESCE(MAX(CASE WHEN Type IN (4, 10) THEN Value END), 0) AS `Max`,\n CASE WHEN Type IN (7, 8, 10) THEN 'II' WHEN Type IN (1, 2, 4) THEN 'I' END AS Type\nFROM T\nWHERE Type IN (1, 2, 4, 7, 8, 10)\nGROUP BY ID, StudentID, Mon, CASE WHEN Type IN (7, 8, 10) THEN 'II' WHEN Type IN (1, 2, 4) THEN 'I' END\nORDER BY ID, StudentID, Mon, Type\n
\n\n
soup wrap:
This should do it:
SELECT ID,
StudentID,
Mon,
MAX(CASE WHEN Type LIKE 'Obtained%' THEN Value END) AS Obtained,
MAX(CASE WHEN Type LIKE 'Benefit%' THEN Value END) AS Benefit,
MAX(CASE WHEN Type LIKE 'Max%' THEN Value END) AS `Max`,
CASE WHEN RIGHT(Type, 2) = 'II' THEN 'II' ELSE 'I' END AS Type
FROM T
GROUP BY ID, StudentID, Mon, CASE WHEN RIGHT(Type, 2) = 'II' THEN 'II' ELSE 'I' END
ORDER BY ID, StudentID, Mon, Type
Although it would make more sense to store the type separately, i.e. have one column for "Obtained", "Max", etc. and another column for "I" / "II".
EDIT
With your revised data structure this should work:
SELECT ID,
StudentID,
Mon,
COALESCE(MAX(CASE WHEN Type IN (1, 7) THEN Value END), 0) AS Obtained,
COALESCE(MAX(CASE WHEN Type IN (2, 8) THEN Value END), 0) AS Benefit,
COALESCE(MAX(CASE WHEN Type IN (4, 10) THEN Value END), 0) AS `Max`,
CASE WHEN Type IN (7, 8, 10) THEN 'II' WHEN Type IN (1, 2, 4) THEN 'I' END AS Type
FROM T
WHERE Type IN (1, 2, 4, 7, 8, 10)
GROUP BY ID, StudentID, Mon, CASE WHEN Type IN (7, 8, 10) THEN 'II' WHEN Type IN (1, 2, 4) THEN 'I' END
ORDER BY ID, StudentID, Mon, Type
qid & accept id:
(13183568, 13184134)
query:
Database schema design for financial forecasting
soup:
I'd think it would be better to store each month's forecast in its own row in a table that looks like this
\nmonth forecast\n----- --------\n 1 30000\n 2 31000\n 3 28000\n ... ...\n 60 52000\n
\nThen you can use the aggregate functions to calculate forecast reports, discounted cash flow etc. ( Like if you want the un-discounted total for just 4 years): \nSELECT SUM(forecast) from FORECASTS where month=>1 and month<=48
\nFor salary expenses, I would think that having a view that does calculations on the fly (or if you DB engine supports "materialized views" should have sufficient performance unless we're talking some giant number of employees or really slow DB.
\nMaybe have a salary history table, that trigger populates when employee data changes/payroll runs
\nemployeeId month Salary\n---------- ----- ------\n 1 1 4000\n 2 1 3000\n 3 1 5000\n 1 2 4100\n 2 2 3100\n 3 2 4800\n ... ... ...\n
\nThen again, you can do SUM or other aggregate function to get to the reported data.
\n
soup wrap:
I'd think it would be better to store each month's forecast in its own row, in a table that looks like this:
month forecast
----- --------
1 30000
2 31000
3 28000
... ...
60 52000
Then you can use the aggregate functions to calculate forecast reports, discounted cash flow, etc. (e.g. if you want the un-discounted total for just 4 years):
SELECT SUM(forecast) from FORECASTS where month >= 1 and month <= 48
For salary expenses, I would think that a view that does the calculations on the fly (or a materialized view, if your DB engine supports them) should have sufficient performance, unless we're talking about some giant number of employees or a really slow DB.
Maybe have a salary history table that a trigger populates when employee data changes or payroll runs:
employeeId month Salary
---------- ----- ------
1 1 4000
2 1 3000
3 1 5000
1 2 4100
2 2 3100
3 2 4800
... ... ...
Then again, you can do SUM or other aggregate function to get to the reported data.
qid & accept id:
(13230133, 13230189)
query:
Selecting all uppercased-value rows of a table in SQL Navigator
soup:
I believe Oracle is case sensitive by default? If so, then this should work:
\nSELECT *\nFROM table_name\nWHERE LOWER(email) <> email\n
\nIf this works then you can simply update them with
\nUPDATE table_name\nSET email = LOWER(email)\nWHERE LOWER(email) <> email\n
\n
soup wrap:
I believe Oracle string comparisons are case-sensitive by default. If so, then this should work:
SELECT *
FROM table_name
WHERE LOWER(email) <> email
If this works, then you can simply update them with:
UPDATE table_name
SET email = LOWER(email)
WHERE LOWER(email) <> email
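SQLite's default string comparison is also case-sensitive, like Oracle's with a binary collation, so the LOWER(email) <> email trick can be tried directly (the sample addresses are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_name (email TEXT)")
conn.executemany("INSERT INTO table_name VALUES (?)",
                 [("JOHN@EXAMPLE.COM",),
                  ("jane@example.com",),
                  ("Mixed@Example.com",)])

# Rows containing any uppercase letters compare unequal to their
# lowercased form under a case-sensitive collation.
mixed = [r[0] for r in conn.execute(
    "SELECT email FROM table_name WHERE LOWER(email) <> email ORDER BY email")]

# Normalise them, then verify nothing is left to fix.
conn.execute("UPDATE table_name SET email = LOWER(email) WHERE LOWER(email) <> email")
remaining = conn.execute(
    "SELECT COUNT(*) FROM table_name WHERE LOWER(email) <> email").fetchone()[0]
```

The SELECT finds the two non-lowercase rows; after the UPDATE, the same predicate matches nothing.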
qid & accept id:
(13234818, 13261639)
query:
Formatting External tables in Greenplum (PostgreSQL)
soup:
It appears that you can:
\nSET DATESTYLE = 'YMD';\n
\nbefore SELECTing from the table. This will affect the interpretation of all dates, though, not just those from the file. If you consistently use unambiguous ISO dates elsewhere that will be fine, but it may be a problem if (for example) you need to also accept 'D/M/Y' date literals in the same query.
\nThis is specific to GreenPlum's CREATE EXTERNAL TABLE and does not apply to SQL-standard SQL/MED foreign data wrappers, as shown below.
\n
\nWhat surprises me is that PostgreSQL proper (which does not have this CREATE EXTERNAL TABLE feature) always accepts ISO-style YYYY-MM-DD and YYYYMMDD dates, irrespective of DATESTYLE. Observe:
\nregress=> SELECT '20121229'::date, '2012-12-29'::date, current_setting('DateStyle');\n date | date | current_setting \n------------+------------+-----------------\n 2012-12-29 | 2012-12-29 | ISO, MDY\n(1 row)\n\nregress=> SET DateStyle = 'DMY';\nSET\nregress=> SELECT '20121229'::date, '2012-12-29'::date, current_setting('DateStyle');\n date | date | current_setting \n------------+------------+-----------------\n 2012-12-29 | 2012-12-29 | ISO, DMY\n(1 row)\n
\n... so if GreenPlum behaved the same way, you should not need to do anything to get these YYYYMMDD dates to be read correctly from the input file.
\nHere's how it works with a PostgreSQL file_fdw SQL/MED foreign data wrapper:
\nCREATE EXTENSION file_fdw;\n\nCOPY (SELECT '20121229', '2012-12-29') TO '/tmp/dates.csv' CSV;\n\nSET DateStyle = 'DMY';\n\nCREATE SERVER csvtest FOREIGN DATA WRAPPER file_fdw;\n\nCREATE FOREIGN TABLE csvtest (\n date1 date,\n date2 date\n) SERVER csvtest OPTIONS ( filename '/tmp/dates.csv', format 'csv' );\n\nSELECT * FROM csvtest ;\n date1 | date2 \n------------+------------\n 2012-12-29 | 2012-12-29\n(1 row)\n
\nThe CSV file contents are:
\n20121229,2012-12-29\n
\nso you can see that Pg will always accept ISO dates for CSV, irrespective of datestyle.
\nIf GreenPlum doesn't, please file a bug. The idea of DateStyle changing the way a foreign table is read after creation is crazy.
\n
soup wrap:
It appears that you can:
SET DATESTYLE = 'YMD';
before SELECTing from the table. This will affect the interpretation of all dates, though, not just those from the file. If you consistently use unambiguous ISO dates elsewhere that will be fine, but it may be a problem if (for example) you need to also accept 'D/M/Y' date literals in the same query.
This is specific to GreenPlum's CREATE EXTERNAL TABLE and does not apply to SQL-standard SQL/MED foreign data wrappers, as shown below.
What surprises me is that PostgreSQL proper (which does not have this CREATE EXTERNAL TABLE feature) always accepts ISO-style YYYY-MM-DD and YYYYMMDD dates, irrespective of DATESTYLE. Observe:
regress=> SELECT '20121229'::date, '2012-12-29'::date, current_setting('DateStyle');
date | date | current_setting
------------+------------+-----------------
2012-12-29 | 2012-12-29 | ISO, MDY
(1 row)
regress=> SET DateStyle = 'DMY';
SET
regress=> SELECT '20121229'::date, '2012-12-29'::date, current_setting('DateStyle');
date | date | current_setting
------------+------------+-----------------
2012-12-29 | 2012-12-29 | ISO, DMY
(1 row)
... so if GreenPlum behaved the same way, you should not need to do anything to get these YYYYMMDD dates to be read correctly from the input file.
Here's how it works with a PostgreSQL file_fdw SQL/MED foreign data wrapper:
CREATE EXTENSION file_fdw;
COPY (SELECT '20121229', '2012-12-29') TO '/tmp/dates.csv' CSV;
SET DateStyle = 'DMY';
CREATE SERVER csvtest FOREIGN DATA WRAPPER file_fdw;
CREATE FOREIGN TABLE csvtest (
date1 date,
date2 date
) SERVER csvtest OPTIONS ( filename '/tmp/dates.csv', format 'csv' );
SELECT * FROM csvtest ;
date1 | date2
------------+------------
2012-12-29 | 2012-12-29
(1 row)
The CSV file contents are:
20121229,2012-12-29
so you can see that Pg will always accept ISO dates for CSV, irrespective of datestyle.
If GreenPlum doesn't, please file a bug. The idea of DateStyle changing the way a foreign table is read after creation is crazy.
qid & accept id:
(13237623, 13237661)
query:
Copy data into another table
soup:
If both tables are truly the same schema:
\nINSERT INTO newTable\nSELECT * FROM oldTable\n
\nOtherwise, you'll have to specify the column names (the column list for newTable is optional if you are specifying a value for all columns and selecting columns in the same order as newTable's schema):
\nINSERT INTO newTable (col1, col2, col3)\nSELECT column1, column2, column3\nFROM oldTable\n
\n
soup wrap:
If both tables truly have the same schema:
INSERT INTO newTable
SELECT * FROM oldTable
Otherwise, you'll have to specify the column names (the column list for newTable is optional if you are specifying a value for all columns and selecting columns in the same order as newTable's schema):
INSERT INTO newTable (col1, col2, col3)
SELECT column1, column2, column3
FROM oldTable
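A minimal check of the column-list form of INSERT ... SELECT, in Python/SQLite on throwaway tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE oldTable (column1 INTEGER, column2 TEXT, column3 TEXT);
    CREATE TABLE newTable (col1 INTEGER, col2 TEXT, col3 TEXT);
    INSERT INTO oldTable VALUES (1, 'a', 'x'), (2, 'b', 'y');
""")

# Explicit column list on newTable, selecting matching columns from oldTable.
conn.execute("""
    INSERT INTO newTable (col1, col2, col3)
    SELECT column1, column2, column3 FROM oldTable
""")
copied = conn.execute("SELECT * FROM newTable ORDER BY col1").fetchall()
```

Both rows arrive in the target table with the columns mapped by position in the lists.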
qid & accept id:
(13241518, 13242284)
query:
sql query including month columns?
soup:
Try This:
\n--setup\ncreate table #fa00100 (assetId int, assetindex int, acquisitionCost int, dateAcquired date)\ncreate table #fa00200 (assetIndex int, moDepreciateRate int, fullyDeprFlag nchar(1), fullyDeprFlagBit bit)\n\ninsert #fa00100 \n select 1, 1, 100, '2012-01-09'\nunion select 2, 2, 500, '2012-05-09'\ninsert #fa00200\n select 1, 10, 'N', 0\nunion select 2, 15, 'Y', 1\n
\n.
\n--solution\ncreate table #dates (d date not null primary key clustered)\ndeclare @sql nvarchar(max)\n, @pivotCols nvarchar(max)\n, @thisMonth date\n, @noMonths int = 4\n\nset @thisMonth = cast(1 + GETUTCDATE() - DAY(getutcdate()) as date)\nselect @thisMonth\nwhile @noMonths > 0\nbegin\n insert #dates select DATEADD(month,@noMonths,@thisMonth) \n set @noMonths = @noMonths - 1\nend\n\nselect @sql = ISNULL(@sql + NCHAR(10) + ',', '') \n--+ ' A.acquisitionCost - (B.moDepreciateRate * DATEDIFF(month,dateAcquired,''' + convert(nvarchar(8), d, 112) + ''')) ' --Original Line\n + ' case when A.acquisitionCost - (B.moDepreciateRate * DATEDIFF(month,dateAcquired,''' + convert(nvarchar(8), d, 112) + ''')) <= 0 then 0 else A.acquisitionCost - (B.moDepreciateRate * DATEDIFF(month,dateAcquired,''' + convert(nvarchar(8), d, 112) + ''')) end ' --new version\n\n+ quotename(DATENAME(month, d) + '_' + right(cast(10000 + YEAR(d) as nvarchar(5)),4))\nfrom #dates\n\nset @sql = 'select A.assetid\n, A.acquisitionCost\n, B.moDepreciateRate \n,' + @sql + '\nfrom #fa00100 A\ninner join #fa00200 B \n on A.assetindex = B.assetindex\nwhere B.fullyDeprFlag = ''N''\nand B.fullyDeprFlagBit = 0\n'\n--nb: B.fullyDeprFlag = ''N'' has double quotes to avoid the quotes from terminating the string\n--I've also included fullyDeprFlagBit to show how the SQL would look if you had a bit column - that will perform much better and will save space over using a character column\n\nprint @sql\nexec(@sql)\n\ndrop table #dates \n
\n.
\n --remove temp tables from setup\ndrop table #fa00100\ndrop table #fa00200\n
\n
soup wrap:
Try This:
--setup
create table #fa00100 (assetId int, assetindex int, acquisitionCost int, dateAcquired date)
create table #fa00200 (assetIndex int, moDepreciateRate int, fullyDeprFlag nchar(1), fullyDeprFlagBit bit)
insert #fa00100
select 1, 1, 100, '2012-01-09'
union select 2, 2, 500, '2012-05-09'
insert #fa00200
select 1, 10, 'N', 0
union select 2, 15, 'Y', 1
.
--solution
create table #dates (d date not null primary key clustered)
declare @sql nvarchar(max)
, @pivotCols nvarchar(max)
, @thisMonth date
, @noMonths int = 4
set @thisMonth = cast(1 + GETUTCDATE() - DAY(getutcdate()) as date)
select @thisMonth
while @noMonths > 0
begin
insert #dates select DATEADD(month,@noMonths,@thisMonth)
set @noMonths = @noMonths - 1
end
select @sql = ISNULL(@sql + NCHAR(10) + ',', '')
--+ ' A.acquisitionCost - (B.moDepreciateRate * DATEDIFF(month,dateAcquired,''' + convert(nvarchar(8), d, 112) + ''')) ' --Original Line
+ ' case when A.acquisitionCost - (B.moDepreciateRate * DATEDIFF(month,dateAcquired,''' + convert(nvarchar(8), d, 112) + ''')) <= 0 then 0 else A.acquisitionCost - (B.moDepreciateRate * DATEDIFF(month,dateAcquired,''' + convert(nvarchar(8), d, 112) + ''')) end ' --new version
+ quotename(DATENAME(month, d) + '_' + right(cast(10000 + YEAR(d) as nvarchar(5)),4))
from #dates
set @sql = 'select A.assetid
, A.acquisitionCost
, B.moDepreciateRate
,' + @sql + '
from #fa00100 A
inner join #fa00200 B
on A.assetindex = B.assetindex
where B.fullyDeprFlag = ''N''
and B.fullyDeprFlagBit = 0
'
--nb: B.fullyDeprFlag = ''N'' has double quotes to avoid the quotes from terminating the string
--I've also included fullyDeprFlagBit to show how the SQL would look if you had a bit column - that will perform much better and will save space over using a character column
print @sql
exec(@sql)
drop table #dates
.
--remove temp tables from setup
drop table #fa00100
drop table #fa00200
qid & accept id:
(13249903, 13250303)
query:
MySQL select multiple rows by referencing to one data field
soup:
Sounds like you want this:
\nselect model_id\nfrom yourtable\nwhere property in (1, 3)\ngroup by model_id\nhaving count(*) > 1;\n
\n\nOr you can use the following:
\nselect model_id\nfrom yourtable t1\nwhere property = 1\n and exists (select model_id\n from yourtable t2\n where t1.model_id = t2.model_id\n and property = 3)\n
\n\n
soup wrap:
Sounds like you want this:
select model_id
from yourtable
where property in (1, 3)
group by model_id
having count(*) > 1;
Or you can use the following:
select model_id
from yourtable t1
where property = 1
and exists (select model_id
from yourtable t2
where t1.model_id = t2.model_id
and property = 3)
qid & accept id:
(13277973, 13278064)
query:
SQL average number of requests per user over time period
soup:
Something like SUM feature will work. Might be a little slow.
\nSELECT SUM(requestType) FROM Requests WHERE `userEmail` = `userEmail` and `date` BETWEEN `first-date YYYY-MM-DD` AND `second-date YYYY-MM-DD`; \n
\n\nI would also recommend, if you have a lot of request, to have one row per user per day and just update the request total for that user.
\nEdit: If you want the last 30 days something like this query should work. It worked on my test table.
\n SELECT SUM(requestType) FROM Requests WHERE `userEmail` = `userEmail` and `date`BETWEEN curdate() - INTERVAL 30 DAY AND curdate();\n
\n
soup wrap:
Something like the SUM function will work. It might be a little slow.
SELECT SUM(requestType) FROM Requests WHERE `userEmail` = 'user@example.com' AND `date` BETWEEN 'first-date YYYY-MM-DD' AND 'second-date YYYY-MM-DD';
I would also recommend, if you have a lot of requests, having one row per user per day and just updating the request total for that user.
Edit: If you want the last 30 days something like this query should work. It worked on my test table.
SELECT SUM(requestType) FROM Requests WHERE `userEmail` = 'user@example.com' AND `date` BETWEEN curdate() - INTERVAL 30 DAY AND curdate();
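The "one row per user per day" idea amounts to an upsert. A minimal sketch using Python's sqlite3, assuming a made-up `daily_requests` table (SQLite uses `ON CONFLICT ... DO UPDATE`; MySQL's equivalent clause is `INSERT ... ON DUPLICATE KEY UPDATE`):

```python
import sqlite3

# One row per user per day, as the answer suggests (all names are illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE daily_requests (
    userEmail TEXT NOT NULL,
    day       TEXT NOT NULL,
    total     INTEGER NOT NULL,
    PRIMARY KEY (userEmail, day))""")

def record_request(email, day, n=1):
    # Upsert: insert the day's row on the first request, increment it afterwards.
    conn.execute("""INSERT INTO daily_requests (userEmail, day, total)
                    VALUES (?, ?, ?)
                    ON CONFLICT (userEmail, day)
                    DO UPDATE SET total = total + excluded.total""",
                 (email, day, n))

record_request("a@example.com", "2012-11-01")
record_request("a@example.com", "2012-11-01")
record_request("a@example.com", "2012-11-02")

total = conn.execute("SELECT SUM(total) FROM daily_requests WHERE userEmail = ?",
                     ("a@example.com",)).fetchone()[0]
print(total)  # 3
```

Summing the pre-aggregated daily rows then replaces summing every individual request.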
qid & accept id:
(13281693, 13281782)
query:
Comparing number in formatted string in MySQL?
soup:
Because your numbers are zero padded, as long as the four letter prefix is the same and always the same length, then this should work as MySQL will do a lexicographical comparison.
\nNote that one less 0 in the padding will cause this to fail:
\nSET @policy1 = 'XXXX-00099';\nSET @policy2 = 'XXXX-000598';\nSELECT @policy1, @policy2, @policy1 > @policy2 AS comparison;\n=========================================\n> 'XXXX-00099', 'XXXX-000598', 1\n
\nIf you need to truly compare the numbers at the end, you will need to parse them out and cast them:
\nSET @policy1 = 'XXXX-00099';\nSET @policy2 = 'XXXX-000598';\nSELECT @policy1, @policy2, \n CONVERT(SUBSTRING(@policy2, INSTR(@policy2, '-')+1), UNSIGNED) >\n CONVERT(SUBSTRING(@policy2, INSTR(@policy2, '-')+1), UNSIGNED) AS comparison;\n=========================================\n> 'XXXX-00099', 'XXXX-000598', 0\n
\n
soup wrap:
Because your numbers are zero padded, as long as the four letter prefix is the same and always the same length, then this should work as MySQL will do a lexicographical comparison.
Note that one less 0 in the padding will cause this to fail:
SET @policy1 = 'XXXX-00099';
SET @policy2 = 'XXXX-000598';
SELECT @policy1, @policy2, @policy1 > @policy2 AS comparison;
=========================================
> 'XXXX-00099', 'XXXX-000598', 1
If you need to truly compare the numbers at the end, you will need to parse them out and cast them:
SET @policy1 = 'XXXX-00099';
SET @policy2 = 'XXXX-000598';
SELECT @policy1, @policy2,
CONVERT(SUBSTRING(@policy1, INSTR(@policy1, '-')+1), UNSIGNED) >
CONVERT(SUBSTRING(@policy2, INSTR(@policy2, '-')+1), UNSIGNED) AS comparison;
=========================================
> 'XXXX-00099', 'XXXX-000598', 0
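The effect is easy to reproduce outside MySQL, since it is plain lexicographic string ordering; a quick Python sketch of both comparisons:

```python
p1, p2 = "XXXX-00099", "XXXX-000598"

# Lexicographic comparison (what MySQL does when the padding widths differ):
lex = p1 > p2  # '9' sorts after '5' at the first differing character
# Numeric comparison of the suffixes (the CONVERT/SUBSTRING approach):
n1 = int(p1.split("-", 1)[1])   # 99
n2 = int(p2.split("-", 1)[1])   # 598
num = n1 > n2
print(lex, num)  # True False
```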
qid & accept id:
(13308281, 13308310)
query:
MySQL GROUP BY "and filter"
soup:
SELECT name, GROUP_CONCAT(number)\nFROM objects\nWHERE number IN (2,3)\nGROUP BY name\nHAVING COUNT(*) = 2\n
\n\nor if you want to retain all value on which the name has,
\nSELECT a.name, GROUP_CONCAT(A.number)\nFROM objects a\n INNER JOIN\n (\n SELECT name\n FROM objects\n WHERE number IN (2,3)\n GROUP BY name\n HAVING COUNT(*) = 2\n ) b ON a.Name = b.Name\nGROUP BY a.name\n
\n\n
soup wrap:
SELECT name, GROUP_CONCAT(number)
FROM objects
WHERE number IN (2,3)
GROUP BY name
HAVING COUNT(*) = 2
or, if you want to retain all of the values that each name has:
SELECT a.name, GROUP_CONCAT(A.number)
FROM objects a
INNER JOIN
(
SELECT name
FROM objects
WHERE number IN (2,3)
GROUP BY name
HAVING COUNT(*) = 2
) b ON a.Name = b.Name
GROUP BY a.name
qid & accept id:
(13345583, 13345825)
query:
oracle - how to list out the products that are going to expire in 2months time?
soup:
you data says that the others apart from pear+orange expire today, so assuming you want to exclude expiring today and include those expiring WITHIN 2 months time:
\nSQL> select food, manufacturedate, add_months(manufacturedate,12) expiry_date from product where add_months(manufacturedate, 12) <= add_months(trunc(sysdate), 2) and add_months(manufacturedate, 12) > trunc(sysdate);\n\nFOOD MANUFACTU EXPIRY_DA\n--------------- --------- ---------\norange 12-JAN-12 12-JAN-13\npear 12-JAN-12 12-JAN-13\n
\nor a more index friendly way of putting it (removing the functions on the column side):
\nSQL> select food, manufacturedate, add_months(manufacturedate,12) expiry_date from product where manufacturedate <= add_months(trunc(sysdate), -10) and manufacturedate > add_months(trunc(sysdate), -12);\n\nFOOD MANUFACTU EXPIRY_DA\n--------------- --------- ---------\norange 12-JAN-12 12-JAN-13\npear 12-JAN-12 12-JAN-13\n
\n
soup wrap:
Your data says that everything apart from the pear and orange expires today, so I'm assuming you want to exclude products expiring today and include those expiring WITHIN 2 months:
SQL> select food, manufacturedate, add_months(manufacturedate,12) expiry_date from product where add_months(manufacturedate, 12) <= add_months(trunc(sysdate), 2) and add_months(manufacturedate, 12) > trunc(sysdate);
FOOD MANUFACTU EXPIRY_DA
--------------- --------- ---------
orange 12-JAN-12 12-JAN-13
pear 12-JAN-12 12-JAN-13
or a more index friendly way of putting it (removing the functions on the column side):
SQL> select food, manufacturedate, add_months(manufacturedate,12) expiry_date from product where manufacturedate <= add_months(trunc(sysdate), -10) and manufacturedate > add_months(trunc(sysdate), -12);
FOOD MANUFACTU EXPIRY_DA
--------------- --------- ---------
orange 12-JAN-12 12-JAN-13
pear 12-JAN-12 12-JAN-13
qid & accept id:
(13377997, 13382350)
query:
Join and Union with Entity Framework
soup:
If i understand correctly,
\nCustomer may or may not have the email (Additional) in emails table.\nAlso, Customer have more than one additional emails entry in emails table. Like below
\nList customers = new List \n{ \n new Customer { ClientId = 1, Email = "client1@domain.com", Credits = 2 },\n new Customer { ClientId = 2, Email = "client2@domain.com", Credits = 1 },\n new Customer { ClientId = 3, Email = "client3@domain.com", Credits = 1 },\n};\n\nList emails = new List \n{ \n new Emails { ClientId = 1, Email = "client1-2@domain.com" },\n new Emails { ClientId = 1, Email = "client1-3@domain.com" },\n new Emails { ClientId = 2, Email = "client2-1@domain.com" },\n};\n
\nIn that case, Use the below query to get it done,
\nvar result = from c in customers\n let _emails = emails.Where(e => c.ClientId == e.ClientId).Select(t => t.Email)\n where c.Email == "client3@domain.com" || _emails.Contains("client3@domain.com")\n select new\n {\n Allowed = c.Credits > 0,\n MainEmail = c.Email\n };\n
\nI hope it helps you.
\n
soup wrap:
If I understand correctly:
A customer may or may not have an additional email in the emails table.
Also, a customer may have more than one additional email entry in the emails table, like below.
List<Customer> customers = new List<Customer>
{
new Customer { ClientId = 1, Email = "client1@domain.com", Credits = 2 },
new Customer { ClientId = 2, Email = "client2@domain.com", Credits = 1 },
new Customer { ClientId = 3, Email = "client3@domain.com", Credits = 1 },
};
List<Emails> emails = new List<Emails>
{
new Emails { ClientId = 1, Email = "client1-2@domain.com" },
new Emails { ClientId = 1, Email = "client1-3@domain.com" },
new Emails { ClientId = 2, Email = "client2-1@domain.com" },
};
In that case, use the query below to get it done:
var result = from c in customers
let _emails = emails.Where(e => c.ClientId == e.ClientId).Select(t => t.Email)
where c.Email == "client3@domain.com" || _emails.Contains("client3@domain.com")
select new
{
Allowed = c.Credits > 0,
MainEmail = c.Email
};
I hope it helps you.
qid & accept id:
(13406949, 13408736)
query:
Formatting a number as a monetary value including separators
soup:
Do it on the client side. Having said that, this example should show you the way.
\nwith p(price1, multiplier) as (select 1234.5, 10)\nselect '$' + replace(cast((CAST(p.Price1 AS decimal(10,2)) * cast(isnull(p.Multiplier,1) as decimal(10,2))) as varchar), '.0000', ''),\n '$' + parsename(convert(varchar,cast(p.price1*isnull(p.Multiplier,1) as money),1),2)\nfrom p\n
\nThe key is in the last expression
\n'$' + parsename(convert(varchar,cast(p.price1*isnull(p.Multiplier,1) as money),1),2)\n
\nNote: if p.price1 is of a higher precision than decimal(10,2), then you may have to cast it in the expression as well to produce a faithful translation since the original CAST(p.Priced1 as decimal(10,2)) will be performing rounding.
\n
soup wrap:
Do it on the client side. Having said that, this example should show you the way.
with p(price1, multiplier) as (select 1234.5, 10)
select '$' + replace(cast((CAST(p.Price1 AS decimal(10,2)) * cast(isnull(p.Multiplier,1) as decimal(10,2))) as varchar), '.0000', ''),
'$' + parsename(convert(varchar,cast(p.price1*isnull(p.Multiplier,1) as money),1),2)
from p
The key is in the last expression
'$' + parsename(convert(varchar,cast(p.price1*isnull(p.Multiplier,1) as money),1),2)
Note: if p.price1 is of a higher precision than decimal(10,2), then you may have to cast it in the expression as well to produce a faithful translation, since the original CAST(p.Price1 AS decimal(10,2)) will perform rounding.
qid & accept id:
(13410246, 13413778)
query:
syntax to query another table using relationship in ORM?
soup:
There are various ways to achieve that:
\n1. use join(...) - I would opt for this one in your case
\nqry = session.query(Sample).join(Cell).filter(Cell.name == "a_string")\n\n>> SELECT sample.id AS sample_id, sample.factor_id AS sample_factor_id\n>> FROM sample JOIN cell ON cell.id = sample.factor_id\n>> WHERE cell.name = :name_1\n
\n2. use any/has(...) - this will use a sub-query
\nqry = session.query(Sample).filter(Sample.cell.has(Cell.name == "a_string"))\n\n>> SELECT sample.id AS sample_id, sample.factor_id AS sample_factor_id\n>> FROM sample\n>> WHERE EXISTS (SELECT 1\n>> FROM cell\n>> WHERE cell.id = sample.factor_id AND cell.name = :name_1)\n
\n
soup wrap:
There are various ways to achieve that:
1. use join(...) - I would opt for this one in your case
qry = session.query(Sample).join(Cell).filter(Cell.name == "a_string")
>> SELECT sample.id AS sample_id, sample.factor_id AS sample_factor_id
>> FROM sample JOIN cell ON cell.id = sample.factor_id
>> WHERE cell.name = :name_1
2. use any/has(...) - this will use a sub-query
qry = session.query(Sample).filter(Sample.cell.has(Cell.name == "a_string"))
>> SELECT sample.id AS sample_id, sample.factor_id AS sample_factor_id
>> FROM sample
>> WHERE EXISTS (SELECT 1
>> FROM cell
>> WHERE cell.id = sample.factor_id AND cell.name = :name_1)
qid & accept id:
(13419701, 13420383)
query:
Compare two sets of an SQL "GROUP BY" result
soup:
I assumed a TrainRoutes table with one row for each of R1, R2 etc. You could replace this with select distinct RouteID from Stops if required.
\nSelect\n r1.RouteID Route1,\n r2.RouteID Route2\nFrom\n -- cross to compare each route with each route\n dbo.TrainRoutes r1\n Cross Join\n dbo.TrainRoutes r2\n Inner Join\n dbo.Stops s1\n On r1.RouteID = s1.RouteID\n Inner Join\n dbo.Stops s2\n On r2.RouteID = s2.RouteID\nWhere\n r1.RouteID < r2.RouteID -- no point in comparing R1 with R2 and R2 with R1\nGroup By\n r1.RouteID,\n r2.RouteID\nHaving\n -- check each route has the same number of stations\n count(Distinct s1.stationID) = count(Distinct s2.stationID) And\n -- check each route has the same stops\n Sum(Case When s1.StationID = s2.StationID Then 1 Else 0 End) = count(Distinct s1.StationID) And\n -- check each route has different halts\n sum(Case When s1.StationID = s2.StationID And s1.Halts = s2.Halts Then 1 Else 0 End) != count(Distinct s1.StationID)\n
\nYou can also do this without the TrainRoute table like so, but you're now cross joining two larger tables:
\nSelect\n s1.RouteID Route1,\n s2.RouteID Route2\nFrom\n dbo.Stops s1\n Cross Join\n dbo.Stops s2\nWhere\n s1.RouteID < s2.RouteID\nGroup By\n s1.RouteID,\n s2.RouteID\nHaving\n count(Distinct s1.stationID) = count(Distinct s2.stationID) And\n Sum(Case When s1.StationID = s2.StationID Then 1 Else 0 End) = count(Distinct s1.StationID) And\n sum(Case When s1.StationID = s2.StationID And s1.Halts = s2.Halts Then 1 Else 0 End) != count(Distinct s1.StationID)\n
\nhttp://sqlfiddle.com/#!6/76978/8
\n
soup wrap:
I assumed a TrainRoutes table with one row for each of R1, R2 etc. You could replace this with select distinct RouteID from Stops if required.
Select
r1.RouteID Route1,
r2.RouteID Route2
From
-- cross to compare each route with each route
dbo.TrainRoutes r1
Cross Join
dbo.TrainRoutes r2
Inner Join
dbo.Stops s1
On r1.RouteID = s1.RouteID
Inner Join
dbo.Stops s2
On r2.RouteID = s2.RouteID
Where
r1.RouteID < r2.RouteID -- no point in comparing R1 with R2 and R2 with R1
Group By
r1.RouteID,
r2.RouteID
Having
-- check each route has the same number of stations
count(Distinct s1.stationID) = count(Distinct s2.stationID) And
-- check each route has the same stops
Sum(Case When s1.StationID = s2.StationID Then 1 Else 0 End) = count(Distinct s1.StationID) And
-- check each route has different halts
sum(Case When s1.StationID = s2.StationID And s1.Halts = s2.Halts Then 1 Else 0 End) != count(Distinct s1.StationID)
You can also do this without the TrainRoute table like so, but you're now cross joining two larger tables:
Select
s1.RouteID Route1,
s2.RouteID Route2
From
dbo.Stops s1
Cross Join
dbo.Stops s2
Where
s1.RouteID < s2.RouteID
Group By
s1.RouteID,
s2.RouteID
Having
count(Distinct s1.stationID) = count(Distinct s2.stationID) And
Sum(Case When s1.StationID = s2.StationID Then 1 Else 0 End) = count(Distinct s1.StationID) And
sum(Case When s1.StationID = s2.StationID And s1.Halts = s2.Halts Then 1 Else 0 End) != count(Distinct s1.StationID)
http://sqlfiddle.com/#!6/76978/8
qid & accept id:
(13427389, 13427423)
query:
Recipe Database, search by ingredient
soup:
Since a recipe can use multiple ingredients and you are looking for recipes that use one or more of the ingredients specified, you should use the DISTINCT keyword to prevent duplicate results where a recipe is using more than one ingredient from the list specified. Also, you can use IN clause to filter on multiple ingredient IDs.
\nselect DISTINCT r.name\nfrom \n recipes r\n inner join ingredient_index i\n on i.recipe_id = r.recipe_id\nwhere i.ingredient_id IN (7, 5);\n
\nAlternatively, if you are looking for recipes that are using all the ingredients specified in the list, then you can group the results by recipe name and check if the count of records is same as the number of ingredients in your list.
\nselect r.name\nfrom \n recipes r\n inner join ingredient_index i\n on i.recipe_id = r.recipe_id\nwhere i.ingredient_id IN (7, 5)\nGROUP BY r.name\nHAVING COUNT(*) = 2\n
\nThis is assuming that there won't be duplicate records with same (recipe_id, ingredient_id) tuple (better ensured with a UNIQUE constraint).
\n
soup wrap:
Since a recipe can use multiple ingredients and you are looking for recipes that use one or more of the ingredients specified, you should use the DISTINCT keyword to prevent duplicate results where a recipe uses more than one ingredient from the specified list. Also, you can use the IN clause to filter on multiple ingredient IDs.
select DISTINCT r.name
from
recipes r
inner join ingredient_index i
on i.recipe_id = r.recipe_id
where i.ingredient_id IN (7, 5);
Alternatively, if you are looking for recipes that are using all the ingredients specified in the list, then you can group the results by recipe name and check if the count of records is same as the number of ingredients in your list.
select r.name
from
recipes r
inner join ingredient_index i
on i.recipe_id = r.recipe_id
where i.ingredient_id IN (7, 5)
GROUP BY r.name
HAVING COUNT(*) = 2
This is assuming that there won't be duplicate records with same (recipe_id, ingredient_id) tuple (better ensured with a UNIQUE constraint).
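That UNIQUE constraint might look like the following; a sketch with Python's sqlite3, with the schema reduced to just the two key columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The UNIQUE constraint guarantees at most one row per (recipe, ingredient) pair.
conn.execute("""CREATE TABLE ingredient_index (
    recipe_id     INTEGER NOT NULL,
    ingredient_id INTEGER NOT NULL,
    UNIQUE (recipe_id, ingredient_id))""")

conn.execute("INSERT INTO ingredient_index VALUES (1, 7)")
dup_rejected = False
try:
    conn.execute("INSERT INTO ingredient_index VALUES (1, 7)")  # same tuple again
except sqlite3.IntegrityError:
    dup_rejected = True
print(dup_rejected)  # True
```

With duplicates ruled out at the schema level, `HAVING COUNT(*) = 2` reliably means "both ingredients present".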
qid & accept id:
(13452415, 13452448)
query:
SQL Server 2008 insert into table using loop
soup:
\nI want to use the IDs I get from this query to insert in another table\n Member which use ContaId as a foreign key.
\n
\nYou can use INSERT INTO .. SELECT instead of cursors and while loops like so:
\nINSERT INTO Member(ContaId)\nSELECT TOP 1000 c.ContaId\nFROM FastGroupe fg\nINNER JOIN FastParticipant fp \n ON fg.FastGroupeId = fp.FastGroupeId\nINNER JOIN Participant p\n ON fp.ParticipantId = p.ParticipantId\nINNER JOIN Contact c\n ON p.ContaId = c.ContaId\nWHERE FastGroupeName like '%Group%'\n
\nUpdate: Try this:
\nINSERT INTO Member(ContaId, PromoId)\nSELECT TOP 1000 c.ContaId, 91 AS PromoId\nFROM FastGroupe fg\n...\n
\nThis will insert the same value 91 for the PromoId for all the 1000 records. And since the MemberId is set to be automatic, just ignore it in the columns' list and it will get an automatic value.
\n
soup wrap:
I want to use the IDs I get from this query to insert in another table
Member which use ContaId as a foreign key.
You can use INSERT INTO .. SELECT instead of cursors and while loops like so:
INSERT INTO Member(ContaId)
SELECT TOP 1000 c.ContaId
FROM FastGroupe fg
INNER JOIN FastParticipant fp
ON fg.FastGroupeId = fp.FastGroupeId
INNER JOIN Participant p
ON fp.ParticipantId = p.ParticipantId
INNER JOIN Contact c
ON p.ContaId = c.ContaId
WHERE FastGroupeName like '%Group%'
Update: Try this:
INSERT INTO Member(ContaId, PromoId)
SELECT TOP 1000 c.ContaId, 91 AS PromoId
FROM FastGroupe fg
...
This will insert the same value 91 for the PromoId for all the 1000 records. And since the MemberId is set to be automatic, just ignore it in the columns' list and it will get an automatic value.
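The "ignore the identity column" point can be sketched with Python's sqlite3, where an INTEGER PRIMARY KEY auto-assigns values when omitted, much like an IDENTITY column (table names borrowed from the answer, data made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Member (
    MemberId INTEGER PRIMARY KEY,  -- auto-assigned when left out of the INSERT
    ContaId  INTEGER,
    PromoId  INTEGER)""")
conn.execute("CREATE TABLE Contact (ContaId INTEGER)")
conn.executemany("INSERT INTO Contact VALUES (?)", [(10,), (11,)])

# INSERT ... SELECT, leaving MemberId out of the column list entirely,
# and supplying the constant 91 for PromoId on every row:
conn.execute("""INSERT INTO Member (ContaId, PromoId)
                SELECT ContaId, 91 FROM Contact""")

rows = conn.execute(
    "SELECT MemberId, ContaId, PromoId FROM Member ORDER BY MemberId").fetchall()
print(rows)  # [(1, 10, 91), (2, 11, 91)]
```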
qid & accept id:
(13471159, 13471249)
query:
Combine multiple rows of table in single row in SQL
soup:
You can use FOR XML PATH:
\nSELECT Ticket, \n STUFF((SELECT distinct ' - ' + cast(UpdatedBy as varchar(20)) + ' ' + comment\n from yourtable t2\n where t1.Ticket = t2.Ticket\n FOR XML PATH(''), TYPE\n\n ).value('.', 'NVARCHAR(MAX)') \n ,1,2,'') comments\nfrom yourtable t1\ngroup by ticket\n
\n\nResult:
\n| TICKET | COMMENTS |\n-----------------------------------------------------------\n| 100 | 23 Text 1 - 24 Text 2 - 25 Text 3 - 26 Text 4 |\n
\n
soup wrap:
You can use FOR XML PATH:
SELECT Ticket,
STUFF((SELECT distinct ' - ' + cast(UpdatedBy as varchar(20)) + ' ' + comment
from yourtable t2
where t1.Ticket = t2.Ticket
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
,1,2,'') comments
from yourtable t1
group by ticket
Result:
| TICKET | COMMENTS |
-----------------------------------------------------------
| 100 | 23 Text 1 - 24 Text 2 - 25 Text 3 - 26 Text 4 |
qid & accept id:
(13474207, 13474490)
query:
sql query if parameter is null select all
soup:
You can also use functions IFNULL,COALESCE,NVL,ISNULL to check null value. It depends on your RDBMS.
\nMySQL:
\nSELECT NAME, SURNAME FROM MY_TABLE WHERE NAME = IFNULL(?,NAME);\n
\nor
\nSELECT NAME, SURNAME FROM MY_TABLE WHERE NAME = COALESCE(?,NAME);\n
\nORACLE:
\nSELECT NAME, SURNAME FROM MY_TABLE WHERE NAME = NVL(?,NAME);\n
\nSQL Server / SYBASE:
\nSELECT NAME, SURNAME FROM MY_TABLE WHERE NAME = ISNULL(?,NAME);\n
\n
soup wrap:
You can also use the functions IFNULL, COALESCE, NVL, or ISNULL to check for a null value. Which one is available depends on your RDBMS.
MySQL:
SELECT NAME, SURNAME FROM MY_TABLE WHERE NAME = IFNULL(?,NAME);
or
SELECT NAME, SURNAME FROM MY_TABLE WHERE NAME = COALESCE(?,NAME);
ORACLE:
SELECT NAME, SURNAME FROM MY_TABLE WHERE NAME = NVL(?,NAME);
SQL Server / SYBASE:
SELECT NAME, SURNAME FROM MY_TABLE WHERE NAME = ISNULL(?,NAME);
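All four spellings follow the same pattern: when the parameter is NULL, the column is compared to itself, which matches every non-NULL row. A quick demonstration with Python's sqlite3 (SQLite also supports COALESCE; the data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MY_TABLE (NAME TEXT, SURNAME TEXT)")
conn.executemany("INSERT INTO MY_TABLE VALUES (?, ?)",
                 [("Ann", "Smith"), ("Bob", "Jones")])

q = "SELECT NAME FROM MY_TABLE WHERE NAME = COALESCE(?, NAME) ORDER BY NAME"
hit = conn.execute(q, ("Ann",)).fetchall()      # parameter given: filter applies
all_rows = conn.execute(q, (None,)).fetchall()  # parameter NULL: NAME = NAME
print(hit)       # [('Ann',)]
print(all_rows)  # [('Ann',), ('Bob',)]
```

One caveat: rows where NAME itself is NULL are still excluded even with a NULL parameter, because NULL = NULL does not evaluate to true.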
qid & accept id:
(13523272, 13524149)
query:
SQL Server Group Concat with Different characters
soup:
\nmake a more human readable solution
\n
\nSorry, this is the best I can do with your requirement.
\n\nMS SQL Server 2008 Schema Setup:
\ncreate table YourTable\n(\n ParentID int,\n ChildName varchar(10)\n);\n\ninsert into YourTable values\n(1, 'Max'),\n(1, 'Jessie'),\n(2, 'Steven'),\n(2, 'Lucy'),\n(2, 'Jake'),\n(3, 'Mark');\n
\nQuery 1:
\nwith T as \n(\n select ParentID,\n ChildName,\n row_number() over(partition by ParentID order by ChildName) as rn,\n count(*) over(partition by ParentID) as cc\n from YourTable\n)\nselect T1.ParentID,\n (\n select case\n when T2.rn = 1 and T2.cc > 1 then ' and '\n else ', ' \n end + T2.ChildName\n from T as T2\n where T1.ParentID = T2.ParentID\n order by T2.rn desc\n for xml path(''), type\n ).value('substring(text()[1], 3)', 'varchar(max)') as ChildNames\nfrom T as T1\ngroup by T1.ParentID\n
\n\n| PARENTID | CHILDNAMES |\n------------------------------------\n| 1 | Max and Jessie |\n| 2 | Steven, Lucy and Jake |\n| 3 | Mark |\n
\n
soup wrap:
make a more human readable solution
Sorry, this is the best I can do with your requirement.
MS SQL Server 2008 Schema Setup:
create table YourTable
(
ParentID int,
ChildName varchar(10)
);
insert into YourTable values
(1, 'Max'),
(1, 'Jessie'),
(2, 'Steven'),
(2, 'Lucy'),
(2, 'Jake'),
(3, 'Mark');
Query 1:
with T as
(
select ParentID,
ChildName,
row_number() over(partition by ParentID order by ChildName) as rn,
count(*) over(partition by ParentID) as cc
from YourTable
)
select T1.ParentID,
(
select case
when T2.rn = 1 and T2.cc > 1 then ' and '
else ', '
end + T2.ChildName
from T as T2
where T1.ParentID = T2.ParentID
order by T2.rn desc
for xml path(''), type
).value('substring(text()[1], 3)', 'varchar(max)') as ChildNames
from T as T1
group by T1.ParentID
| PARENTID | CHILDNAMES |
------------------------------------
| 1 | Max and Jessie |
| 2 | Steven, Lucy and Jake |
| 3 | Mark |
qid & accept id:
(13537347, 13537369)
query:
Get row where column2 is X and column1 is max of column1
soup:
SELECT * FROM table WHERE col2='CDE' ORDER BY col1 DESC LIMIT 1\n
\nin case if col1 wasn't an increment it would go somewhat like
\nSELECT *,MAX(col1) AS max_col1 FROM table WHERE col2='CDE' GROUP BY col2 LIMIT 1\n
\n
soup wrap:
SELECT * FROM table WHERE col2='CDE' ORDER BY col1 DESC LIMIT 1
If col1 isn't an incrementing column, it would go somewhat like:
SELECT *,MAX(col1) AS max_col1 FROM table WHERE col2='CDE' GROUP BY col2 LIMIT 1
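Note that mixing `SELECT *` with `MAX()` and `GROUP BY` as above relies on MySQL's lax GROUP BY handling and returns an arbitrary row for the non-aggregated columns in most engines. A portable way to fetch the whole row holding the group's maximum is a subquery; a sketch with Python's sqlite3 (sample data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (col1 INTEGER, col2 TEXT);
    INSERT INTO t VALUES (1, 'CDE'), (5, 'CDE'), (3, 'CDE'), (9, 'XYZ');
""")

# Fetch the row whose col1 equals the maximum within the col2 = 'CDE' group.
row = conn.execute("""SELECT * FROM t
                      WHERE col2 = 'CDE'
                        AND col1 = (SELECT MAX(col1) FROM t WHERE col2 = 'CDE')
                   """).fetchone()
print(row)  # (5, 'CDE')
```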
qid & accept id:
(13545617, 13545670)
query:
Reference from one table to another entire table and specified row
soup:
I assume you use mysql database.
\nCREATE TABLE A\n(\n id INT NOT NULL PRIMARY KEY,\n b_id INT NOT NULL,\n c_id INT NOT NULL,\n FOREIGN KEY (b_id) REFERENCES B (id),\n FOREIGN KEY (c_id) REFERENCES C (id)\n) TYPE = INNODB;\n
\nUpdate for using postgresql:
\nCREATE TABLE "A"\n(\n id integer NOT NULL, \n b_id integer NOT NULL, \n c_id integer NOT NULL, \n CONSTRAINT id PRIMARY KEY (id), \n CONSTRAINT b_id FOREIGN KEY (b_id) REFERENCES "B" (id) \n ON UPDATE NO ACTION ON DELETE NO ACTION, --with no action restriction\n CONSTRAINT c_id FOREIGN KEY (c_id) REFERENCES "C" (id) \n ON UPDATE CASCADE ON DELETE CASCADE --with cascade restriction\n) \nWITH (\n OIDS = FALSE\n)\n;\nALTER TABLE "C" OWNER TO postgres;\n
\n
\n
soup wrap:
I assume you are using a MySQL database.
CREATE TABLE A
(
id INT NOT NULL PRIMARY KEY,
b_id INT NOT NULL,
c_id INT NOT NULL,
FOREIGN KEY (b_id) REFERENCES B (id),
FOREIGN KEY (c_id) REFERENCES C (id)
) ENGINE = InnoDB;
Update for using postgresql:
CREATE TABLE "A"
(
id integer NOT NULL,
b_id integer NOT NULL,
c_id integer NOT NULL,
CONSTRAINT id PRIMARY KEY (id),
CONSTRAINT b_id FOREIGN KEY (b_id) REFERENCES "B" (id)
ON UPDATE NO ACTION ON DELETE NO ACTION, --with no action restriction
CONSTRAINT c_id FOREIGN KEY (c_id) REFERENCES "C" (id)
ON UPDATE CASCADE ON DELETE CASCADE --with cascade restriction
)
WITH (
OIDS = FALSE
)
;
ALTER TABLE "C" OWNER TO postgres;
qid & accept id:
(13584250, 13584444)
query:
SQL using listagg() and group by non duplicated values
soup:
Query:
\n\nSELECT \nID, LISTAGG(TELNO, ', ') \nWITHIN GROUP (ORDER BY TELNO) \nAS TEL_LIST\nFROM tbl\nGROUP BY ID;\n
\nResult:
\n| ID | TEL_LIST |\n---------------------------------------------\n| 1 | 0123456789, 0207983498 |\n| 2 | 0124339848, 02387694364, 09348374834 |\n
\n
soup wrap:
Query:
SELECT
ID, LISTAGG(TELNO, ', ')
WITHIN GROUP (ORDER BY TELNO)
AS TEL_LIST
FROM tbl
GROUP BY ID;
Result:
| ID | TEL_LIST |
---------------------------------------------
| 1 | 0123456789, 0207983498 |
| 2 | 0124339848, 02387694364, 09348374834 |
qid & accept id:
(13595333, 13595976)
query:
How copy data from one database to another on different server?
soup:
Use Oracle export to export a whole table to a file, copy the file to serverB and import.
\nhttp://www.orafaq.com/wiki/Import_Export_FAQ\n
\nYou can use rsync to sync an oracle .dbf file or files to another server. This has problems and syncing all files works more reliably.
\nFor groups of records, write a query to build a pipe-delimited (or whatever delimiter suits your data) file with rows you need to move. Copy that file to serverB. Write a control file for sqlldr and use sqlldr to load the rows into the table. sqlldr is part of the oracle installation.
\nhttp://www.thegeekstuff.com/2012/06/oracle-sqlldr/\n
\nIf you have db listeners up on each server and tnsnames knows about both, you can directly:
\ninsert into mytable@remote \nselect * from mytable\n where somecolumn=somevalue;\n
\nLook at the remote table section:
\nhttp://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_9014.htm\n
\nIf this is going to be an ongoing thing, create a db link from instance@serverA to instance@serverB.\nYou can then do anything you have permissions for with data on one instance or the other or both.
\nhttp://psoug.org/definition/CREATE_DATABASE_LINK.htm\n
\n
soup wrap:
Use Oracle export to export a whole table to a file, copy the file to serverB and import.
http://www.orafaq.com/wiki/Import_Export_FAQ
You can use rsync to sync an Oracle .dbf file or files to another server. This approach has problems, though; syncing all of the files works more reliably than syncing individual ones.
For groups of records, write a query to build a pipe-delimited (or whatever delimiter suits your data) file with rows you need to move. Copy that file to serverB. Write a control file for sqlldr and use sqlldr to load the rows into the table. sqlldr is part of the oracle installation.
http://www.thegeekstuff.com/2012/06/oracle-sqlldr/
If you have db listeners up on each server and tnsnames knows about both, you can directly:
insert into mytable@remote
select * from mytable
where somecolumn=somevalue;
Look at the remote table section:
http://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_9014.htm
If this is going to be an ongoing thing, create a db link from instance@serverA to instance@serverB.
You can then do anything you have permissions for with data on one instance or the other or both.
http://psoug.org/definition/CREATE_DATABASE_LINK.htm
qid & accept id:
(13618078, 13618305)
query:
Query where foreign key column can be NULL
soup:
If there is "no row at all for the uid", and you JOIN like you do, you get no row as result. Use LEFT [OUTER] JOIN instead:
\nSELECT u.uid, u.fname, u.lname\nFROM u \nLEFT JOIN u_org o ON u.uid = o.uid \nLEFT JOIN login l ON u.uid = l.uid \nWHERE (o.orgid = 2 OR o.orgid IS NULL)\nAND l.access IS DISTINCT FROM 4;\n
\nAlso, you need the parenthesis I added because of operator precedence. (AND binds before OR).
\nI use IS DISTINCT FROM instead of != in the last WHERE condition because, again, login.access might be NULL, which would not qualify.
\nHowever, since you only seem to be interested in columns from table u to begin with, this alternative query would be more elegant:
\nSELECT u.uid, u.fname, u.lname\nFROM u\nWHERE (u.uid IS NULL OR EXISTS (\n SELECT 1\n FROM u_org o\n WHERE o.uid = u.uid\n AND o.orgid = 2\n ))\nAND NOT EXISTS (\n SELECT 1\n FROM login l\n WHERE l.uid = u.uid\n AND l.access = 4\n );\n
\nThis alternative has the additional advantage, that you always get one row from u, even if there are multiple rows in u_org or login.
\n
soup wrap:
If there is "no row at all for the uid", and you JOIN like you do, you get no row as result. Use LEFT [OUTER] JOIN instead:
SELECT u.uid, u.fname, u.lname
FROM u
LEFT JOIN u_org o ON u.uid = o.uid
LEFT JOIN login l ON u.uid = l.uid
WHERE (o.orgid = 2 OR o.orgid IS NULL)
AND l.access IS DISTINCT FROM 4;
Also, you need the parentheses I added because of operator precedence (AND binds before OR).
I use IS DISTINCT FROM instead of != in the last WHERE condition because, again, login.access might be NULL, which would not qualify.
However, since you only seem to be interested in columns from table u to begin with, this alternative query would be more elegant:
SELECT u.uid, u.fname, u.lname
FROM u
WHERE (u.uid IS NULL OR EXISTS (
SELECT 1
FROM u_org o
WHERE o.uid = u.uid
AND o.orgid = 2
))
AND NOT EXISTS (
SELECT 1
FROM login l
WHERE l.uid = u.uid
AND l.access = 4
);
This alternative has the additional advantage, that you always get one row from u, even if there are multiple rows in u_org or login.
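The reason plain `!=` is not enough can be shown in a few lines with Python's sqlite3; SQLite spells the NULL-safe comparison as `IS NOT` rather than PostgreSQL's `IS DISTINCT FROM`, but the semantics are the same (table and data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE login (uid INTEGER, access INTEGER)")
conn.executemany("INSERT INTO login VALUES (?, ?)", [(1, 4), (2, 2), (3, None)])

# Plain != evaluates to NULL for the NULL row, which silently drops it:
neq = conn.execute(
    "SELECT uid FROM login WHERE access != 4 ORDER BY uid").fetchall()
# The NULL-safe inequality keeps the NULL row (uid 3):
nullsafe = conn.execute(
    "SELECT uid FROM login WHERE access IS NOT 4 ORDER BY uid").fetchall()
print(neq)       # [(2,)]
print(nullsafe)  # [(2,), (3,)]
```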
qid & accept id:
(13632163, 13632683)
query:
Create a view with alternate/default values for missing relationships
soup:
Is this along the right lines for what you're after?
\nRunnable example here: http://sqlfiddle.com/#!3/894e9/4
\nif object_id('[FloorName]') is not null drop table [FloorName]\nif object_id('[BuildingName]') is not null drop table [BuildingName]\nif object_id('[Floor]') is not null drop table [Floor]\nif object_id('[Building]') is not null drop table [Building]\nif object_id('[Language]') is not null drop table [Language]\n\ncreate table [Language]\n(\n Id bigint not null identity(1,1) primary key clustered\n , code nvarchar(5)\n)\ncreate table [Building]\n(\n Id bigint not null identity(1,1) primary key clustered\n , something nvarchar(64)\n)\ncreate table [Floor]\n(\n Id bigint not null identity(1,1) primary key clustered\n , BuildingId bigint foreign key references [Building](Id)\n , something nvarchar(64)\n)\ncreate table [BuildingName]\n(\n Id bigint not null identity(1,1) primary key clustered\n , BuildingId bigint foreign key references [Building](Id)\n , LanguageId bigint foreign key references [Language](Id)\n , name nvarchar(64)\n)\ncreate table [FloorName]\n(\n Id bigint not null identity(1,1) primary key clustered\n , FloorId bigint foreign key references [Floor](Id)\n , LanguageId bigint foreign key references [Language](Id)\n , name nvarchar(64)\n)\n\ninsert [Language]\n select 'en-us'\nunion select 'en-gb'\nunion select 'fr'\n\ninsert [Building]\n select 'B1'\nunion select 'B2'\n\ninsert [Floor]\n select 1, 'F1.1'\nunion select 1, 'F1.2'\nunion select 1, 'F1.3'\nunion select 1, 'F1.4'\nunion select 1, 'F1.5'\nunion select 2, 'F2.1'\nunion select 2, 'F2.2'\nunion select 2, 'F2.3'\nunion select 2, 'F2.4'\nunion select 2, 'F2.5'\n\ninsert BuildingName\nselect b.Id\n, l.id\n, 'BuildingName :: ' + b.something + ' ' + l.code\nfrom [Building] b\ncross join [Language] l\nwhere l.code in ('en-us', 'fr')\n\ninsert FloorName\nselect f.Id\n, l.Id\n, 'FloorName :: ' + f.something + ' ' + l.code\nfrom [Floor] f\ncross join [Language] l\nwhere f.something in ( 'F1.1', 'F1.2', 'F2.1')\nand l.code in ('en-us', 'fr')\n\ninsert FloorName\nselect f.Id\n, l.Id\n, 'FloorName :: ' 
+ f.something + ' ' + l.code\nfrom [Floor] f\ncross join [Language] l\nwhere f.something not in ( 'F1.1', 'F1.2', 'F2.1')\nand l.code in ('en-us')\n\n\ndeclare @defaultLanguageId bigint\nselect @defaultLanguageId = id from [Language] where code = 'en-us' --default language is US English\n\nselect b.Id\n, b.something\n, bn.name\n, isnull(bfn.name, bfnDefault.name)\n, bl.code BuildingLanguage\nfrom [Building] b\ninner join [BuildingName] bn\n on bn.BuildingId = b.Id\ninner join [Language] bl\n on bl.Id = bn.LanguageId\ninner join [Floor] bf\n on bf.BuildingId = b.Id\nleft outer join [FloorName] bfn\n on bfn.FloorId = bf.Id\n and bfn.LanguageId = bl.Id\nleft outer join [Language] bfl\n on bfl.Id = bfn.LanguageId\nleft outer join [FloorName] bfnDefault\n on bfnDefault.FloorId = bf.Id\n and bfnDefault.LanguageId = @defaultLanguageId\n
\nEDIT
\nThis version defaults any language:
\nselect b.Id\n, b.something\n, bn.name\n, isnull(bfn.name, (select top 1 name from [FloorName] x where x.FloorId=bf.Id))\n, bl.code BuildingLanguage\nfrom [Building] b\ninner join [BuildingName] bn\n on bn.BuildingId = b.Id\ninner join [Language] bl\n on bl.Id = bn.LanguageId\ninner join [Floor] bf\n on bf.BuildingId = b.Id\nleft outer join [FloorName] bfn\n on bfn.FloorId = bf.Id\n and bfn.LanguageId = bl.Id\nleft outer join [Language] bfl\n on bfl.Id = bfn.LanguageId\n
\n
soup wrap:
Is this along the right lines for what you're after?
Runnable example here: http://sqlfiddle.com/#!3/894e9/4
if object_id('[FloorName]') is not null drop table [FloorName]
if object_id('[BuildingName]') is not null drop table [BuildingName]
if object_id('[Floor]') is not null drop table [Floor]
if object_id('[Building]') is not null drop table [Building]
if object_id('[Language]') is not null drop table [Language]
create table [Language]
(
Id bigint not null identity(1,1) primary key clustered
, code nvarchar(5)
)
create table [Building]
(
Id bigint not null identity(1,1) primary key clustered
, something nvarchar(64)
)
create table [Floor]
(
Id bigint not null identity(1,1) primary key clustered
, BuildingId bigint foreign key references [Building](Id)
, something nvarchar(64)
)
create table [BuildingName]
(
Id bigint not null identity(1,1) primary key clustered
, BuildingId bigint foreign key references [Building](Id)
, LanguageId bigint foreign key references [Language](Id)
, name nvarchar(64)
)
create table [FloorName]
(
Id bigint not null identity(1,1) primary key clustered
, FloorId bigint foreign key references [Floor](Id)
, LanguageId bigint foreign key references [Language](Id)
, name nvarchar(64)
)
insert [Language]
select 'en-us'
union select 'en-gb'
union select 'fr'
insert [Building]
select 'B1'
union select 'B2'
insert [Floor]
select 1, 'F1.1'
union select 1, 'F1.2'
union select 1, 'F1.3'
union select 1, 'F1.4'
union select 1, 'F1.5'
union select 2, 'F2.1'
union select 2, 'F2.2'
union select 2, 'F2.3'
union select 2, 'F2.4'
union select 2, 'F2.5'
insert BuildingName
select b.Id
, l.id
, 'BuildingName :: ' + b.something + ' ' + l.code
from [Building] b
cross join [Language] l
where l.code in ('en-us', 'fr')
insert FloorName
select f.Id
, l.Id
, 'FloorName :: ' + f.something + ' ' + l.code
from [Floor] f
cross join [Language] l
where f.something in ( 'F1.1', 'F1.2', 'F2.1')
and l.code in ('en-us', 'fr')
insert FloorName
select f.Id
, l.Id
, 'FloorName :: ' + f.something + ' ' + l.code
from [Floor] f
cross join [Language] l
where f.something not in ( 'F1.1', 'F1.2', 'F2.1')
and l.code in ('en-us')
declare @defaultLanguageId bigint
select @defaultLanguageId = id from [Language] where code = 'en-us' --default language is US English
select b.Id
, b.something
, bn.name
, isnull(bfn.name, bfnDefault.name)
, bl.code BuildingLanguage
from [Building] b
inner join [BuildingName] bn
on bn.BuildingId = b.Id
inner join [Language] bl
on bl.Id = bn.LanguageId
inner join [Floor] bf
on bf.BuildingId = b.Id
left outer join [FloorName] bfn
on bfn.FloorId = bf.Id
and bfn.LanguageId = bl.Id
left outer join [Language] bfl
on bfl.Id = bfn.LanguageId
left outer join [FloorName] bfnDefault
on bfnDefault.FloorId = bf.Id
and bfnDefault.LanguageId = @defaultLanguageId
EDIT
This version defaults any language:
select b.Id
, b.something
, bn.name
, isnull(bfn.name, (select top 1 name from [FloorName] x where x.FloorId=bf.Id))
, bl.code BuildingLanguage
from [Building] b
inner join [BuildingName] bn
on bn.BuildingId = b.Id
inner join [Language] bl
on bl.Id = bn.LanguageId
inner join [Floor] bf
on bf.BuildingId = b.Id
left outer join [FloorName] bfn
on bfn.FloorId = bf.Id
and bfn.LanguageId = bl.Id
left outer join [Language] bfl
on bfl.Id = bfn.LanguageId
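The fall-back-to-default-language idea in the query above can be illustrated with a stripped-down SQLite sketch run from Python (COALESCE stands in for T-SQL's isnull(), and the table contents are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE FloorName (FloorId INTEGER, LanguageId INTEGER, name TEXT);
INSERT INTO FloorName VALUES
  (1, 1, 'Floor 1 (en-us)'),
  (1, 2, 'Floor 1 (fr)'),
  (2, 1, 'Floor 2 (en-us)');   -- no fr translation for floor 2
""")

# Request language 2 (fr) and fall back to language 1 (en-us) where the
# translation is missing -- the same COALESCE-over-two-LEFT-JOINs shape.
rows = cur.execute("""
SELECT f.FloorId, COALESCE(fn.name, fnDefault.name) AS name
FROM (SELECT DISTINCT FloorId FROM FloorName) f
LEFT JOIN FloorName fn
       ON fn.FloorId = f.FloorId AND fn.LanguageId = 2
LEFT JOIN FloorName fnDefault
       ON fnDefault.FloorId = f.FloorId AND fnDefault.LanguageId = 1
ORDER BY f.FloorId
""").fetchall()
print(rows)  # [(1, 'Floor 1 (fr)'), (2, 'Floor 2 (en-us)')]
```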
qid & accept id:
(13678718, 13679093)
query:
Execute a WHERE clause before another one
soup:
6 answers and 5 of them don't work (for SQL Server)...
\nSELECT *\n FROM foo\n WHERE CASE WHEN LEN(bar) = 4 THEN\n CASE WHEN CONVERT(Int,bar) >= 5000 THEN 1 ELSE 0 END\n END = 1;\n
\nThe WHERE/INNER JOIN conditions can be executed in any order that the query optimizer determines is best. There is no short-circuit boolean evaluation.
\nSpecifically for your question, since you KNOW that the data with 4-characters is a number, then you can do a direct lexicographical (text) comparison (yes it works):
\nSELECT *\n FROM foo\n WHERE LEN(bar) = 4 AND bar > '5000';\n
\n
soup wrap:
6 answers and 5 of them don't work (for SQL Server)...
SELECT *
FROM foo
WHERE CASE WHEN LEN(bar) = 4 THEN
CASE WHEN CONVERT(Int,bar) >= 5000 THEN 1 ELSE 0 END
END = 1;
The WHERE/INNER JOIN conditions can be executed in any order that the query optimizer determines is best. There is no short-circuit boolean evaluation.
Specifically for your question: since you KNOW that the 4-character data is numeric, you can do a direct lexicographic (text) comparison (yes, it works):
SELECT *
FROM foo
WHERE LEN(bar) = 4 AND bar > '5000';
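A quick check of the "yes it works" claim, sketched in Python with SQLite (sample values are mine): for digit strings of equal length, text ordering agrees with numeric ordering, so the comparison against '5000' behaves as expected.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE foo (bar TEXT)")
cur.executemany("INSERT INTO foo VALUES (?)",
                [("4999",), ("5000",), ("5001",), ("9000",), ("123",), ("abc",)])

# length() guards the comparison so only 4-character values are compared;
# among equal-length digit strings, text order equals numeric order.
rows = cur.execute("""
SELECT bar FROM foo
WHERE length(bar) = 4 AND bar > '5000'
ORDER BY bar
""").fetchall()
print(rows)  # [('5001',), ('9000',)]
```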
qid & accept id:
(13717630, 13719535)
query:
Choose view select statement dynamically by session variable in PostgreSQL
soup:
Try something like this:
\nSELECT 'A'\nFROM tableA\nWHERE current_setting(setting_name) = 'setting A'\nUNION ALL\nSELECT 'B'\nFROM tableB\nWHERE current_setting(setting_name) = 'setting B'\n
\nDetails on postgresql session variables here.
\nUPD It will give the results of one of the SELECT. If current_setting(setting_name) equals to 'setting A' the first query will return the results, but the second wont.
\nFor your example the query will look like:
\nSELECT 'A'\nFROM tableA\nWHERE myVar = 1\nUNION ALL\nSELECT 'B'\nFROM tableB\nWHERE myVar != 1\n
\nUPD Checked: postgres executes only one of the queries. EXPLAIN ANALYZE shows that the second query was planned but marked as (never executes).
\n
soup wrap:
Try something like this:
SELECT 'A'
FROM tableA
WHERE current_setting(setting_name) = 'setting A'
UNION ALL
SELECT 'B'
FROM tableB
WHERE current_setting(setting_name) = 'setting B'
Details on postgresql session variables here.
UPD It will give the results of only one of the SELECTs. If current_setting(setting_name) equals 'setting A', the first query returns its results, but the second won't.
For your example the query will look like:
SELECT 'A'
FROM tableA
WHERE myVar = 1
UNION ALL
SELECT 'B'
FROM tableB
WHERE myVar != 1
UPD Checked: Postgres executes only one of the queries. EXPLAIN ANALYZE shows that the second query was planned but marked as (never executed).
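SQLite has no current_setting(), but the branch-selection behaviour of the UNION ALL can be mimicked with a one-row settings table (a sketch in Python; note this only demonstrates the filtering result, not the never-executed plan optimization Postgres applies):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE settings (myVar INTEGER);
INSERT INTO settings VALUES (1);
CREATE TABLE tableA (x TEXT);
CREATE TABLE tableB (x TEXT);
INSERT INTO tableA VALUES ('from A');
INSERT INTO tableB VALUES ('from B');
""")

# Each branch's WHERE clause checks the session-wide value, so only one
# branch of the UNION ALL contributes rows.
rows = cur.execute("""
SELECT x FROM tableA WHERE (SELECT myVar FROM settings) = 1
UNION ALL
SELECT x FROM tableB WHERE (SELECT myVar FROM settings) != 1
""").fetchall()
print(rows)  # [('from A',)]
```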
qid & accept id:
(13730484, 13731188)
query:
SELECT multiple rows from single column into single row
soup:
You would use FOR XML PATH for this:
\nselect p.name,\n Stuff((SELECT ', ' + s.skillName \n FROM skilllink l\n left join skill s\n on l.skillid = s.id \n where p.id = l.personid\n FOR XML PATH('')),1,1,'') Skills\nfrom person p\n
\n\nResult:
\n| NAME | SKILLS |\n----------------------------\n| Bill | Telepathy, Karate |\n| Bob | (null) |\n| Jim | Carpentry |\n
\n
soup wrap:
You would use FOR XML PATH for this:
select p.name,
Stuff((SELECT ', ' + s.skillName
FROM skilllink l
left join skill s
on l.skillid = s.id
where p.id = l.personid
FOR XML PATH('')),1,1,'') Skills
from person p
Result:
| NAME | SKILLS |
----------------------------
| Bill | Telepathy, Karate |
| Bob | (null) |
| Jim | Carpentry |
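Outside SQL Server, the same aggregation is usually done with a string-aggregate function. Here is a sketch with SQLite's group_concat() from Python, using the tables implied by the answer (the sample rows are invented to reproduce the result table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE person    (id INTEGER, name TEXT);
CREATE TABLE skill     (id INTEGER, skillName TEXT);
CREATE TABLE skilllink (personid INTEGER, skillid INTEGER);
INSERT INTO person VALUES (1, 'Bill'), (2, 'Bob'), (3, 'Jim');
INSERT INTO skill  VALUES (1, 'Telepathy'), (2, 'Karate'), (3, 'Carpentry');
INSERT INTO skilllink VALUES (1, 1), (1, 2), (3, 3);
""")

# group_concat() plays the role of the STUFF(... FOR XML PATH('')) trick;
# the correlated subquery keeps people with no skills (Bob) in the result.
rows = cur.execute("""
SELECT p.name,
       (SELECT group_concat(s.skillName, ', ')
        FROM skilllink l JOIN skill s ON l.skillid = s.id
        WHERE l.personid = p.id) AS Skills
FROM person p
ORDER BY p.name
""").fetchall()
print(rows)
```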
qid & accept id:
(13758033, 13758855)
query:
how to join multiple select statement together
soup:
I guess you need this?
\nselect * from mastertable\nleft join carcolortable on mastertable.carcolor=carcolortable.id\nleft join varianttable on mastertable.variant=varianttable.id\nleft join accessoriestable on mastertable.accessories=accessoriestable.id\n
\nIf as you've described in the comment mastertable.carcolor (and others) contains a comma separated list of Id's in varchar then it should be:
\nselect * from mastertable\nleft join carcolortable on \n ( ','+mastertable.carcolor+',' \n LIKE \n '%,'+CAST(carcolortable.id as varchar(100))+',%'\n )\nleft join varianttable on \n ( ','+mastertable.variant+',' \n LIKE \n '%,'+CAST(varianttable.id as varchar(100))+',%'\n )\n\nleft join accessoriestable on \n ( ','+mastertable.accessories+',' \n LIKE \n '%,'+CAST(accessoriestable.id as varchar(100))+',%'\n )\n
\n
soup wrap:
I guess you need this?
select * from mastertable
left join carcolortable on mastertable.carcolor=carcolortable.id
left join varianttable on mastertable.variant=varianttable.id
left join accessoriestable on mastertable.accessories=accessoriestable.id
If, as you've described in the comment, mastertable.carcolor (and the others) contains a comma-separated list of IDs stored as varchar, then it should be:
select * from mastertable
left join carcolortable on
( ','+mastertable.carcolor+','
LIKE
'%,'+CAST(carcolortable.id as varchar(100))+',%'
)
left join varianttable on
( ','+mastertable.variant+','
LIKE
'%,'+CAST(varianttable.id as varchar(100))+',%'
)
left join accessoriestable on
( ','+mastertable.accessories+','
LIKE
'%,'+CAST(accessoriestable.id as varchar(100))+',%'
)
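The reason for wrapping both sides in commas is that a bare LIKE '%1%' would also match id 12. A small SQLite sketch from Python (|| replaces T-SQL's + for concatenation; the data is invented) shows the delimiter trick doing the right thing:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE mastertable   (id INTEGER, carcolor TEXT);  -- CSV list of color ids
CREATE TABLE carcolortable (id INTEGER, color TEXT);
INSERT INTO mastertable   VALUES (1, '1,12'), (2, '2');
INSERT INTO carcolortable VALUES (1, 'red'), (2, 'blue'), (12, 'green');
""")

# Wrapping both sides in commas means id 1 only matches the element ',1,'
# and never the ',12,' inside the list.
rows = cur.execute("""
SELECT m.id, cc.color
FROM mastertable m
LEFT JOIN carcolortable cc
  ON ',' || m.carcolor || ',' LIKE '%,' || cc.id || ',%'
ORDER BY m.id, cc.id
""").fetchall()
print(rows)  # [(1, 'red'), (1, 'green'), (2, 'blue')]
```

Row 1's list '1,12' matches color ids 1 and 12 but not 2, which a naive '%' + id + '%' pattern would get wrong.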
qid & accept id:
(13771275, 13771834)
query:
Query different IDs with different values?
soup:
For a list of players without duplicates an EXISTS semi-join is probably best:
\nSELECT playerFirstName, playerLastName\nFROM player AS p \nWHERE EXISTS (\n SELECT 1\n FROM player2Statistic AS ps \n WHERE ps.playerID = p.playerID\n AND ps.StatisticID = 1\n AND ps.p2sStatistic > 65\n )\nAND EXISTS (\n SELECT 1\n FROM player2Statistic AS ps \n WHERE ps.playerID = p.playerID\n AND ps.StatisticID = 3\n AND ps.p2sStatistic > 295\n );\n
\nColumn names and context are derived from the provided screenshots. The query in the question does not quite cover it.
\nNote the parenthesis, they are needed to cope with operator precedence.
\nThis is probably faster (duplicates are probably not possible):
\nSELECT p.playerFirstName, p.playerLastName\nFROM player AS p \nJOIN player2Statistic AS ps1 USING (playerID)\nJOIN player2Statistic AS ps3 USING (playerID)\nAND ps1.StatisticID = 1\nAND ps1.p2sStatistic > 65\nAND ps3.StatisticID = 3\nAND ps3.p2sStatistic > 295;\n
\nIf your top-secret brand of RDBMS does not support the SQL-standard (USING (playerID), substitute: ON ps1.playerID = p.playerID to the same effect.
\nIt's a case of relational division. Find many more query techniques to deal with it under this related question:
\nHow to filter SQL results in a has-many-through relation
\n
soup wrap:
For a list of players without duplicates an EXISTS semi-join is probably best:
SELECT playerFirstName, playerLastName
FROM player AS p
WHERE EXISTS (
SELECT 1
FROM player2Statistic AS ps
WHERE ps.playerID = p.playerID
AND ps.StatisticID = 1
AND ps.p2sStatistic > 65
)
AND EXISTS (
SELECT 1
FROM player2Statistic AS ps
WHERE ps.playerID = p.playerID
AND ps.StatisticID = 3
AND ps.p2sStatistic > 295
);
Column names and context are derived from the provided screenshots. The query in the question does not quite cover it.
Note the parentheses; they are needed to cope with operator precedence.
This is probably faster (duplicates are probably not possible):
SELECT p.playerFirstName, p.playerLastName
FROM player AS p
JOIN player2Statistic AS ps1 USING (playerID)
JOIN player2Statistic AS ps3 USING (playerID)
WHERE ps1.StatisticID = 1
AND ps1.p2sStatistic > 65
AND ps3.StatisticID = 3
AND ps3.p2sStatistic > 295;
If your top-secret brand of RDBMS does not support the SQL-standard USING (playerID), substitute ON ps1.playerID = p.playerID to the same effect.
It's a case of relational division. Find many more query techniques to deal with it under this related question:
How to filter SQL results in a has-many-through relation
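The self-join form of this relational division can be checked with a SQLite sketch from Python (ON is used instead of USING, and the sample players and statistics are my own):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE player (playerID INTEGER, playerFirstName TEXT, playerLastName TEXT);
CREATE TABLE player2Statistic (playerID INTEGER, StatisticID INTEGER, p2sStatistic INTEGER);
INSERT INTO player VALUES (1, 'Ana', 'X'), (2, 'Ben', 'Y');
INSERT INTO player2Statistic VALUES
  (1, 1, 70), (1, 3, 300),   -- Ana clears both thresholds
  (2, 1, 60), (2, 3, 400);   -- Ben fails the StatisticID = 1 threshold
""")

# One join per required statistic; a player only survives both joins
# plus the WHERE filters if every condition is met.
rows = cur.execute("""
SELECT p.playerFirstName, p.playerLastName
FROM player p
JOIN player2Statistic ps1 ON ps1.playerID = p.playerID
JOIN player2Statistic ps3 ON ps3.playerID = p.playerID
WHERE ps1.StatisticID = 1 AND ps1.p2sStatistic > 65
  AND ps3.StatisticID = 3 AND ps3.p2sStatistic > 295
""").fetchall()
print(rows)  # [('Ana', 'X')]
```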
qid & accept id:
(13789442, 13791342)
query:
List all the jobs that have been executed within a specified date?
soup:
To list all the jobs that started within a specified date:
\ndeclare @date date = getdate()\n\nSELECT\n J.job_id,\n J.name\nFROM msdb.dbo.sysjobs AS J \nINNER JOIN msdb.dbo.sysjobhistory AS H ON H.job_id = J.job_id\nWHERE run_date = CONVERT(VARCHAR(8), GETDATE(), 112)\nGROUP BY J.job_id, J.name\n
\nTo list all the steps for a specified job on a specified date with their status:
\ndeclare @date date = getdate()\ndeclare @job_name varchar(50) = 'test'\n\nSELECT\n H.run_date,\n H.run_time,\n H.step_id,\n H.step_name,\n H.run_status\nFROM msdb.dbo.sysjobs AS J\nINNER JOIN msdb.dbo.sysjobhistory AS H ON H.job_id = J.job_id\nWHERE \n run_date = CONVERT(VARCHAR(8), GETDATE(), 112)\n AND J.name = @job_name\n
\nMore information here.
\n
soup wrap:
To list all the jobs that started within a specified date:
declare @date date = getdate()
SELECT
J.job_id,
J.name
FROM msdb.dbo.sysjobs AS J
INNER JOIN msdb.dbo.sysjobhistory AS H ON H.job_id = J.job_id
WHERE run_date = CONVERT(VARCHAR(8), @date, 112)
GROUP BY J.job_id, J.name
To list all the steps for a specified job on a specified date with their status:
declare @date date = getdate()
declare @job_name varchar(50) = 'test'
SELECT
H.run_date,
H.run_time,
H.step_id,
H.step_name,
H.run_status
FROM msdb.dbo.sysjobs AS J
INNER JOIN msdb.dbo.sysjobhistory AS H ON H.job_id = J.job_id
WHERE
run_date = CONVERT(VARCHAR(8), @date, 112)
AND J.name = @job_name
More information here.
qid & accept id:
(13791170, 13791263)
query:
How do I join tables where a column has exactly all values that I want?
soup:
Try this
\nSELECT\n whatever\nFROM\n A\n INNER JOIN B\n ON A.A_ID = B.A_ID\nWHERE\n B.C_ID IN (4, 5)\n
\nor
\nSELECT\n whatever\nFROM\n A\n INNER JOIN B\n ON A.A_ID = B.A_ID\nWHERE\n B.C_ID = 4 OR B.C_ID = 5\n
\n
\nUPDATE
\nIf you want only matching pairs
\nSELECT\n whatever\nFROM\n A\n INNER JOIN B\n ON A.A_ID = B.A_ID\nWHERE\n A.A_ID IN (SELECT A_ID\n FROM B\n WHERE C_ID IN (4, 5)\n GROUP BY A_ID\n HAVING COUNT(*) = 2) AND\n B.C_ID IN (4, 5)\n
\nThe sub-select groups by A_ID and counts the records. The HAVING clause works like the WHERE clause but is executed after grouping. So the inner select returns only A_IDs corresponding to (4, 5)-pairs of C_ID. The whole query always returns an even number of records like
\n\nA_ID | B_ID | C_ID\n 1 | 1 | 4\n 1 | 2 | 5\n 2 | 3 | 4\n 2 | 4 | 5\n
\n
\nEDIT
\nIf you only want A_IDs where not only C_IDs 4 and 5 are present but where no further C_IDs exist then change the query to
\nSELECT B.*\nFROM A INNER JOIN B ON A.A_ID = B.A_ID\nWHERE B.C_ID IN (4, 5) AND\n A.A_ID IN (SELECT A_ID\n FROM B\n GROUP BY A_ID\n HAVING MIN(C_ID)=4 AND MAX(C_ID)=5 AND COUNT(*)=2)\n
\nIf the two numbers (4 and 5 in this example) are always contiguous, you can drop the COUNT(*)=2 part.
\n(Note: accoring to one of your comments the join is on the A_ID column. I changed that in all my examples.)
\nUPDATE by Robin
\nThanks, with your help I came up with this:
\nSELECT\n *\nFROM\n A a\n INNER JOIN B\n ON a.A_ID = B.A_ID\nWHERE\n (SELECT COUNT(*) FROM B b WHERE b.A_ID = a.A_ID and C_ID IN (4, 5)) =\n (SELECT COUNT(*) FROM A aa INNER JOIN B b ON aa.A_ID = b.A_ID WHERE b.A_ID = a.A_ID)\n
\n
soup wrap:
Try this
SELECT
whatever
FROM
A
INNER JOIN B
ON A.A_ID = B.A_ID
WHERE
B.C_ID IN (4, 5)
or
SELECT
whatever
FROM
A
INNER JOIN B
ON A.A_ID = B.A_ID
WHERE
B.C_ID = 4 OR B.C_ID = 5
UPDATE
If you want only matching pairs
SELECT
whatever
FROM
A
INNER JOIN B
ON A.A_ID = B.A_ID
WHERE
A.A_ID IN (SELECT A_ID
FROM B
WHERE C_ID IN (4, 5)
GROUP BY A_ID
HAVING COUNT(*) = 2) AND
B.C_ID IN (4, 5)
The sub-select groups by A_ID and counts the records. The HAVING clause works like the WHERE clause but is executed after grouping. So the inner select returns only A_IDs corresponding to (4, 5)-pairs of C_ID. The whole query always returns an even number of records like
A_ID | B_ID | C_ID
1 | 1 | 4
1 | 2 | 5
2 | 3 | 4
2 | 4 | 5
EDIT
If you only want A_IDs where not only C_IDs 4 and 5 are present but where no further C_IDs exist then change the query to
SELECT B.*
FROM A INNER JOIN B ON A.A_ID = B.A_ID
WHERE B.C_ID IN (4, 5) AND
A.A_ID IN (SELECT A_ID
FROM B
GROUP BY A_ID
HAVING MIN(C_ID)=4 AND MAX(C_ID)=5 AND COUNT(*)=2)
If the two numbers (4 and 5 in this example) are always contiguous, you can drop the COUNT(*)=2 part.
(Note: according to one of your comments the join is on the A_ID column. I changed that in all my examples.)
UPDATE by Robin
Thanks, with your help I came up with this:
SELECT
*
FROM
A a
INNER JOIN B
ON a.A_ID = B.A_ID
WHERE
(SELECT COUNT(*) FROM B b WHERE b.A_ID = a.A_ID and C_ID IN (4, 5)) =
(SELECT COUNT(*) FROM A aa INNER JOIN B b ON aa.A_ID = b.A_ID WHERE b.A_ID = a.A_ID)
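The MIN/MAX/COUNT trick in the EDIT section is worth seeing in isolation. Here is a SQLite sketch from Python (table B and its rows are invented) showing that only the group containing exactly {4, 5} survives the HAVING clause:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE B (A_ID INTEGER, C_ID INTEGER);
INSERT INTO B VALUES
  (1, 4), (1, 5),          -- exactly {4, 5}
  (2, 4), (2, 5), (2, 7),  -- has 4 and 5 but also 7
  (3, 4);                  -- only 4
""")

# MIN/MAX pin the value range to [4, 5] and COUNT(*) = 2 rules out extra
# rows, so only groups containing exactly the pair survive.
rows = cur.execute("""
SELECT A_ID
FROM B
GROUP BY A_ID
HAVING MIN(C_ID) = 4 AND MAX(C_ID) = 5 AND COUNT(*) = 2
""").fetchall()
print(rows)  # [(1,)]
```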
qid & accept id:
(13832037, 13832050)
query:
MySQL: Select values based on current month and day
soup:
SELECT *\nFROM History\nWHERE DATE_FORMAT(CURDATE(), '%M') = `month` AND\n DAY(CURDATE()) = `day_num`\n
\n\n- SQLFiddle Demo
\n
\nOR
\nSELECT *\nFROM History\nWHERE MONTHNAME(CURDATE()) = `month` AND\n DAY(CURDATE()) = `day_num`\n
\n\n- SQLFiddle Demo
\n
\nOther Sources
\n\n
soup wrap:
SELECT *
FROM History
WHERE DATE_FORMAT(CURDATE(), '%M') = `month` AND
DAY(CURDATE()) = `day_num`
OR
SELECT *
FROM History
WHERE MONTHNAME(CURDATE()) = `month` AND
DAY(CURDATE()) = `day_num`
qid & accept id:
(13840468, 13840517)
query:
SQL query for fetching a single record in format "column heading: column value"
soup:
You can use the UNPIVOT function to do this, the version below concatenates the column name and value together, but you can always display them as separate columns:
\nselect col+':'+cast(value as varchar(10)) col\nfrom test\nunpivot\n(\n value\n for col in (A, B, C, D)\n) unpiv\n
\n\nThe above works great if you have a known number of columns, but if you have 800 columns that you want to transform, you might want to use dynamic sql to perform this:
\nDECLARE @colsUnpivot AS NVARCHAR(MAX),\n @query AS NVARCHAR(MAX)\n\nselect @colsUnpivot = stuff((select ','+quotename(C.name)\n from sys.columns as C\n where C.object_id = object_id('test')\n for xml path('')), 1, 1, '')\n\nset @query \n = 'select col+'':''+cast(value as varchar(10)) col\n from test\n unpivot\n (\n value\n for col in ('+ @colsunpivot +')\n ) u'\n\nexec(@query)\n
\n\nNote: when using UNPIVOT the datatypes of all of the columns that need to be transformed must be the same. So you might have to cast/convert data as needed.
\nEdit #1, since your datatypes are different on all of your columns and you need to unpivot them, then you can use the following code.
\nThe first piece get the list of columns that you want to unpivot dynamically:
\nselect @colsUnpivot = stuff((select ','+quotename(C.name)\n from sys.columns as C\n where C.object_id = object_id('test')\n for xml path('')), 1, 1, '')\n
\nThe second piece gets the same list of columns but wraps each column in a cast as a varchar:
\nselect @colsUnpivotCast = stuff((select ', cast('+quotename(C.name)+' as varchar(50)) as '+quotename(C.name)\n from sys.columns as C\n where C.object_id = object_id('test')\n for xml path('')), 1, 1, '')\n
\nThen your final query will be:
\nDECLARE @colsUnpivot AS NVARCHAR(MAX),\n @colsUnpivotCast AS NVARCHAR(MAX),\n @query AS NVARCHAR(MAX)\n\n\nselect @colsUnpivot = stuff((select ','+quotename(C.name)\n from sys.columns as C\n where C.object_id = object_id('test')\n for xml path('')), 1, 1, '')\n\nselect @colsUnpivotCast = stuff((select ', cast('+quotename(C.name)+' as varchar(50)) as '+quotename(C.name)\n from sys.columns as C\n where C.object_id = object_id('test')\n for xml path('')), 1, 1, '')\n\n\nset @query \n = 'select col+'':''+value col\n from\n (\n select '+@colsUnpivotCast+'\n from test\n ) src\n unpivot\n (\n value\n for col in ('+ @colsunpivot +')\n ) u'\n\n\nexec(@query)\n
\n\nThe UNPIVOT function is performing the same process as a UNION ALL which would look like this:
\nselect col+':'+value as col\nfrom\n(\n select A value, 'A' col\n from test\n union all\n select cast(B as varchar(10)) value, 'B' col\n from test\n union all\n select cast(C as varchar(10)) value, 'C' col\n from test\n union all\n select cast(D as varchar(10)) value, 'D' col\n from test\n) src\n
\n\nThe result of all of the queries is the same:
\n| COL |\n----------\n| A:1 |\n| B:2.00 |\n| C:3 |\n| D:4 |\n
\nEdit #2: using UNPIVOT strips out any of the null columns which could cause some data to drop. If that is the case, then you will want to wrap the columns with IsNull() to replace the null values:
\nDECLARE @colsUnpivot AS NVARCHAR(MAX),\n @colsUnpivotCast AS NVARCHAR(MAX),\n @query AS NVARCHAR(MAX)\n\n\nselect @colsUnpivot = stuff((select ','+quotename(C.name)\n from sys.columns as C\n where C.object_id = object_id('test')\n for xml path('')), 1, 1, '')\n\nselect @colsUnpivotCast = stuff((select ', IsNull(cast('+quotename(C.name)+' as varchar(50)), '''') as '+quotename(C.name)\n from sys.columns as C\n where C.object_id = object_id('test')\n for xml path('')), 1, 1, '')\n\n\nset @query \n = 'select col+'':''+value col\n from\n (\n select '+@colsUnpivotCast+'\n from test\n ) src\n unpivot\n (\n value\n for col in ('+ @colsunpivot +')\n ) u'\n\n\nexec(@query)\n
\n\nReplacing the null values, will give a result like this:
\n| COL |\n----------\n| A:1 |\n| B:2.00 |\n| C: |\n| D:4 |\n
\n
soup wrap:
You can use the UNPIVOT function to do this. The version below concatenates the column name and value together, but you can always display them as separate columns:
select col+':'+cast(value as varchar(10)) col
from test
unpivot
(
value
for col in (A, B, C, D)
) unpiv
The above works great if you have a known number of columns, but if you have 800 columns that you want to transform, you might want to use dynamic SQL to perform this:
DECLARE @colsUnpivot AS NVARCHAR(MAX),
@query AS NVARCHAR(MAX)
select @colsUnpivot = stuff((select ','+quotename(C.name)
from sys.columns as C
where C.object_id = object_id('test')
for xml path('')), 1, 1, '')
set @query
= 'select col+'':''+cast(value as varchar(10)) col
from test
unpivot
(
value
for col in ('+ @colsunpivot +')
) u'
exec(@query)
Note: when using UNPIVOT the datatypes of all of the columns that need to be transformed must be the same. So you might have to cast/convert data as needed.
Edit #1: since the datatypes differ across your columns and you need to unpivot them all, you can use the following code.
The first piece gets the list of columns that you want to unpivot dynamically:
select @colsUnpivot = stuff((select ','+quotename(C.name)
from sys.columns as C
where C.object_id = object_id('test')
for xml path('')), 1, 1, '')
The second piece gets the same list of columns but wraps each column in a cast as a varchar:
select @colsUnpivotCast = stuff((select ', cast('+quotename(C.name)+' as varchar(50)) as '+quotename(C.name)
from sys.columns as C
where C.object_id = object_id('test')
for xml path('')), 1, 1, '')
Then your final query will be:
DECLARE @colsUnpivot AS NVARCHAR(MAX),
@colsUnpivotCast AS NVARCHAR(MAX),
@query AS NVARCHAR(MAX)
select @colsUnpivot = stuff((select ','+quotename(C.name)
from sys.columns as C
where C.object_id = object_id('test')
for xml path('')), 1, 1, '')
select @colsUnpivotCast = stuff((select ', cast('+quotename(C.name)+' as varchar(50)) as '+quotename(C.name)
from sys.columns as C
where C.object_id = object_id('test')
for xml path('')), 1, 1, '')
set @query
= 'select col+'':''+value col
from
(
select '+@colsUnpivotCast+'
from test
) src
unpivot
(
value
for col in ('+ @colsunpivot +')
) u'
exec(@query)
The UNPIVOT function is performing the same process as a UNION ALL which would look like this:
select col+':'+value as col
from
(
select A value, 'A' col
from test
union all
select cast(B as varchar(10)) value, 'B' col
from test
union all
select cast(C as varchar(10)) value, 'C' col
from test
union all
select cast(D as varchar(10)) value, 'D' col
from test
) src
The result of all of the queries is the same:
| COL |
----------
| A:1 |
| B:2.00 |
| C:3 |
| D:4 |
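Databases without UNPIVOT can use the UNION ALL form directly. A runnable SQLite sketch from Python of the equivalence just shown (the single sample row mirrors the result table above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE test (A INTEGER, B TEXT, C INTEGER, D INTEGER);
INSERT INTO test VALUES (1, '2.00', 3, 4);
""")

# Every column is cast to text and stacked with UNION ALL, then the
# label and value are glued together, matching the UNPIVOT output.
rows = cur.execute("""
SELECT col || ':' || value AS col FROM (
  SELECT CAST(A AS TEXT) AS value, 'A' AS col FROM test
  UNION ALL SELECT B, 'B' FROM test
  UNION ALL SELECT CAST(C AS TEXT), 'C' FROM test
  UNION ALL SELECT CAST(D AS TEXT), 'D' FROM test
)
""").fetchall()
print(rows)
```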
Edit #2: UNPIVOT strips out NULL values, which can cause some data to drop. If that is a concern, wrap the columns with IsNull() to replace the null values:
DECLARE @colsUnpivot AS NVARCHAR(MAX),
@colsUnpivotCast AS NVARCHAR(MAX),
@query AS NVARCHAR(MAX)
select @colsUnpivot = stuff((select ','+quotename(C.name)
from sys.columns as C
where C.object_id = object_id('test')
for xml path('')), 1, 1, '')
select @colsUnpivotCast = stuff((select ', IsNull(cast('+quotename(C.name)+' as varchar(50)), '''') as '+quotename(C.name)
from sys.columns as C
where C.object_id = object_id('test')
for xml path('')), 1, 1, '')
set @query
= 'select col+'':''+value col
from
(
select '+@colsUnpivotCast+'
from test
) src
unpivot
(
value
for col in ('+ @colsunpivot +')
) u'
exec(@query)
Replacing the null values will give a result like this:
| COL |
----------
| A:1 |
| B:2.00 |
| C: |
| D:4 |
qid & accept id:
(13862099, 13863447)
query:
Magento SQL query: Get all simple products that are "not visible individually"
soup:
There are many reasons to not do this without the ORM, all of which may (or may not) apply to your needs (store filters, reading data from the correct table, etc). At the very least, you can use the product collection object to build the query which you would run:
\n$coll = Mage::getModel('catalog/product')->getCollection();\n$coll->addAttributeToFilter('visibility' , Mage_Catalog_Model_Product_Visibility::VISIBILITY_NOT_VISIBLE);\necho $coll->getSelect();\n
\nThe resulting query will look like this:
\nSELECT `e`.*, IF(at_visibility.value_id > 0, at_visibility.value, at_visibility_default.value) AS `visibility`\nFROM `catalog_product_entity` AS `e`\nINNER JOIN `catalog_product_entity_int` AS `at_visibility_default`\n ON (`at_visibility_default`.`entity_id` = `e`.`entity_id`)\n AND (`at_visibility_default`.`attribute_id` = '526')\n AND `at_visibility_default`.`store_id` = 0\nLEFT JOIN `catalog_product_entity_int` AS `at_visibility` ON (`at_visibility`.`entity_id` = `e`.`entity_id`)\n AND (`at_visibility`.`attribute_id` = '526')\n AND (`at_visibility`.`store_id` = 1)\nWHERE (IF(at_visibility.value_id > 0, at_visibility.value, at_visibility_default.value) = '1')\n
\n
soup wrap:
There are many reasons to not do this without the ORM, all of which may (or may not) apply to your needs (store filters, reading data from the correct table, etc). At the very least, you can use the product collection object to build the query which you would run:
$coll = Mage::getModel('catalog/product')->getCollection();
$coll->addAttributeToFilter('visibility' , Mage_Catalog_Model_Product_Visibility::VISIBILITY_NOT_VISIBLE);
echo $coll->getSelect();
The resulting query will look like this:
SELECT `e`.*, IF(at_visibility.value_id > 0, at_visibility.value, at_visibility_default.value) AS `visibility`
FROM `catalog_product_entity` AS `e`
INNER JOIN `catalog_product_entity_int` AS `at_visibility_default`
ON (`at_visibility_default`.`entity_id` = `e`.`entity_id`)
AND (`at_visibility_default`.`attribute_id` = '526')
AND `at_visibility_default`.`store_id` = 0
LEFT JOIN `catalog_product_entity_int` AS `at_visibility` ON (`at_visibility`.`entity_id` = `e`.`entity_id`)
AND (`at_visibility`.`attribute_id` = '526')
AND (`at_visibility`.`store_id` = 1)
WHERE (IF(at_visibility.value_id > 0, at_visibility.value, at_visibility_default.value) = '1')
qid & accept id:
(13901809, 13902047)
query:
Sql how to remove duplicate records with merging values?
soup:
As far as I know, you can't do this, you can't UPDATE and DELETE in one single query. However, you can do this as two UPDATE and DELETE queries like so:
\nUPDATE Table1 t1\nINNER JOIN\n(\n SELECT val1, GROUP_CONCAT(val2 SEPARATOR ',') Val2\n FROM Table1\n GROUP BY val1\n) t2 ON t1.val1 = t2.val1\nSET t1.val2 = t2.val2;\n\nDELETE t\nFROM table1 t\nWHERE id NOT IN\n(\n SELECT ID\n FROM\n (\n SELECT MIN(ID) id, val1\n FROM table1\n GROUP BY val1\n ) sub\n );\n
\nThis will make the changes you want.
\nNote that: You have to put these two queries in one TRANSACTION.
\nSQL Fiddle Demo
\nThese two queries will make your table looks like:
\n| ID | VAL1 | VAL2 |\n------------------------\n| 1 | john | sam,joe |\n| 2 | larry | tom |\n
\n
soup wrap:
As far as I know you can't do this: you can't UPDATE and DELETE in one single query. However, you can do it as two queries, an UPDATE followed by a DELETE, like so:
UPDATE Table1 t1
INNER JOIN
(
SELECT val1, GROUP_CONCAT(val2 SEPARATOR ',') Val2
FROM Table1
GROUP BY val1
) t2 ON t1.val1 = t2.val1
SET t1.val2 = t2.val2;
DELETE t
FROM table1 t
WHERE id NOT IN
(
SELECT ID
FROM
(
SELECT MIN(ID) id, val1
FROM table1
GROUP BY val1
) sub
);
This will make the changes you want.
Note: you have to put these two queries in one TRANSACTION.
SQL Fiddle Demo
These two queries will make your table look like:
| ID | VAL1 | VAL2 |
------------------------
| 1 | john | sam,joe |
| 2 | larry | tom |
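The same merge-then-dedupe can be sketched in SQLite from Python. Since SQLite's UPDATE syntax differs from MySQL's multi-table UPDATE, this sketch stages the group_concat results in a temp table first (an implementation choice of mine, not part of the answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE table1 (id INTEGER, val1 TEXT, val2 TEXT);
INSERT INTO table1 VALUES (1, 'john', 'sam'), (2, 'larry', 'tom'), (3, 'john', 'joe');
""")

# Stage the merged values (group_concat plays the role of MySQL's
# GROUP_CONCAT), then update every row and delete all but the lowest
# id of each val1 group.
cur.executescript("""
CREATE TEMP TABLE merged AS
  SELECT val1, group_concat(val2, ',') AS val2, MIN(id) AS keep_id
  FROM table1 GROUP BY val1;

UPDATE table1
SET val2 = (SELECT val2 FROM merged WHERE merged.val1 = table1.val1);

DELETE FROM table1
WHERE id NOT IN (SELECT keep_id FROM merged);
""")
rows = cur.execute("SELECT id, val1, val2 FROM table1 ORDER BY id").fetchall()
print(rows)
```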
qid & accept id:
(13967474, 13967524)
query:
count no of instance of tuple with same value in some attribute
soup:
Try this:
\nSELECT orderid, COUNT(orderid) no_of_iteraction\nFROM tblTemp \nGROUP BY orderid\n
\nOR
\nAs per your request using SUM function
\nSELECT orderid, SUM(1) no_of_iteraction\nFROM tblTemp \nGROUP BY orderid\n
\nOR
\nSELECT orderid, SUM(cnt)\nFROM (SELECT orderid, 1 cnt FROM tblTemp ORDER BY orderid) AS A \nGROUP BY orderid\n
\n
soup wrap:
Try this:
SELECT orderid, COUNT(orderid) no_of_iteraction
FROM tblTemp
GROUP BY orderid
OR
As per your request, using the SUM function:
SELECT orderid, SUM(1) no_of_iteraction
FROM tblTemp
GROUP BY orderid
OR
SELECT orderid, SUM(cnt)
FROM (SELECT orderid, 1 cnt FROM tblTemp ORDER BY orderid) AS A
GROUP BY orderid
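All three spellings give the same per-group counts; a quick sketch in Python's sqlite3 (sample data invented) showing COUNT(orderid) and SUM(1) side by side:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tblTemp (orderid INTEGER)")
cur.executemany("INSERT INTO tblTemp VALUES (?)", [(1,), (1,), (2,), (1,), (2,)])
# COUNT(orderid) and SUM(1) produce identical per-group counts
counts = cur.execute("""SELECT orderid, COUNT(orderid), SUM(1)
                        FROM tblTemp
                        GROUP BY orderid
                        ORDER BY orderid""").fetchall()
print(counts)  # [(1, 3, 3), (2, 2, 2)]
```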
qid & accept id:
(14056134, 14056652)
query:
reusing result of SELECT statement within a CASE statement sqllite
soup:
This is an alternative way of structuring the query:
\nINSERT INTO search_email(meta, subject, body, sender, tos, ccs, folder, threadid)\n SELECT 'meta1', 'subject1', 'body1', 'sender1', 'tos1', 'ccs1', 'folder1',\n coalesce((SELECT search_email.threadID\n FROM search_email \n WHERE search_email.subject MATCH '%query%' AND \n ((search_email.sender = '%sender%' AND search_email.tos = '%receiver%') OR\n (search_email.sender = '%receiver%' AND search_email.tos = '%sender%')\n )\n LIMIT 1\n ),\n \n )\n
\nThis is using a select instead of values. It gets the thread id that matches the conditions, or NULL if none match. The second clause of the coalesce is then run when the first is NULL. You can generate the new id there.
\nI do have a problem with this approach. To me, it seems that you should have a Thread table that manages the threads. The ThreadId should be an autoincremented id in this table. The emails table can then reference this id. In other words, I think the data model needs to be thought out in more detail.
\nThe following query will not query work, but it gives the idea of moving the thread to the subquery:
\nINSERT INTO search_email(meta, subject, body, sender, tos, ccs, folder, threadid)\n SELECT 'meta1', 'subject1', 'body1', 'sender1', 'tos1', 'ccs1', 'folder1',\n coalesce(t.threadID,\n \n )\n from (SELECT search_email.threadID\n FROM search_email \n WHERE search_email.subject MATCH '%query%' AND \n ((search_email.sender = '%sender%' AND search_email.tos = '%receiver%') OR\n (search_email.sender = '%receiver%' AND search_email.tos = '%sender%')\n )\n LIMIT 1\n ) t\n
\nThe reason it will not work is because the from clause will return no rows rather than 1 row with a NULL value. So, to get what you want, you can use:
\n from (SELECT search_email.threadID\n FROM search_email \n WHERE search_email.subject MATCH '%query%' AND \n ((search_email.sender = '%sender%' AND search_email.tos = '%receiver%') OR\n (search_email.sender = '%receiver%' AND search_email.tos = '%sender%')\n )\n union all\n select NULL\n order by (case when threadId is not null then 1 else 0 end) desc\n LIMIT 1\n ) t\n
\nThis ensures that a NULL value is returned when there is no thread.
\n
soup wrap:
This is an alternative way of structuring the query:
INSERT INTO search_email(meta, subject, body, sender, tos, ccs, folder, threadid)
SELECT 'meta1', 'subject1', 'body1', 'sender1', 'tos1', 'ccs1', 'folder1',
coalesce((SELECT search_email.threadID
FROM search_email
WHERE search_email.subject MATCH '%query%' AND
((search_email.sender = '%sender%' AND search_email.tos = '%receiver%') OR
(search_email.sender = '%receiver%' AND search_email.tos = '%sender%')
)
LIMIT 1
),
)
This is using a select instead of values. It gets the thread id that matches the conditions, or NULL if none match. The second clause of the coalesce is then run when the first is NULL. You can generate the new id there.
I do have a problem with this approach. To me, it seems that you should have a Thread table that manages the threads. The ThreadId should be an autoincremented id in this table. The emails table can then reference this id. In other words, I think the data model needs to be thought out in more detail.
The following query will not work, but it gives the idea of moving the thread to the subquery:
INSERT INTO search_email(meta, subject, body, sender, tos, ccs, folder, threadid)
SELECT 'meta1', 'subject1', 'body1', 'sender1', 'tos1', 'ccs1', 'folder1',
coalesce(t.threadID,
)
from (SELECT search_email.threadID
FROM search_email
WHERE search_email.subject MATCH '%query%' AND
((search_email.sender = '%sender%' AND search_email.tos = '%receiver%') OR
(search_email.sender = '%receiver%' AND search_email.tos = '%sender%')
)
LIMIT 1
) t
The reason it will not work is because the from clause will return no rows rather than 1 row with a NULL value. So, to get what you want, you can use:
from (SELECT search_email.threadID
FROM search_email
WHERE search_email.subject MATCH '%query%' AND
((search_email.sender = '%sender%' AND search_email.tos = '%receiver%') OR
(search_email.sender = '%receiver%' AND search_email.tos = '%sender%')
)
union all
select NULL
order by (case when threadId is not null then 1 else 0 end) desc
LIMIT 1
) t
This ensures that a NULL value is returned when there is no thread.
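Here is a runnable sketch of that UNION ALL fallback trick in Python's sqlite3. It uses LIKE in place of the FTS MATCH operator, and sorts on an explicit priority column rather than the CASE expression (SQLite restricts ORDER BY expressions in compound selects); the table contents are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE search_email (threadID INTEGER, subject TEXT)")
cur.execute("INSERT INTO search_email VALUES (7, 'hello world')")

def find_thread(pattern):
    # UNION ALL a NULL fallback row, sort real matches first, keep one row
    row = cur.execute("""
        SELECT threadID FROM (
            SELECT threadID, 0 AS pri FROM search_email WHERE subject LIKE ?
            UNION ALL
            SELECT NULL, 1
        ) ORDER BY pri LIMIT 1""", (pattern,)).fetchone()
    return row[0]

print(find_thread("%hello%"))    # 7
print(find_thread("%nomatch%"))  # None
```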
qid & accept id:
(14091654, 14091682)
query:
How to query insert on updated rows?
soup:
UPDATE logs_month SET status ='1'\nWHERE DATE_FORMAT(month,"%m/%y") = '11/12';\nCOMMIT;\nINSERT INTO some_table (columns) values (select columns\nfrom logs_month where DATE_FORMAT(month,"%m/%y") = '11/12';\n
\nYou can do with TRIGGER also,
\nDELIMITER $$\nCREATE TRIGGER `logs_m` \nAFTER UPDATE ON `logs_month`\nFOR EACH ROW \nBEGIN\n IF NEW.status=1 THEN\n INSERT INTO some_table (field) values (NEW.field);\n END IF;\nEND$$\n\nDELIMITER ;\n
\nYou can do like this
\n
soup wrap:
UPDATE logs_month SET status ='1'
WHERE DATE_FORMAT(month,"%m/%y") = '11/12';
COMMIT;
INSERT INTO some_table (columns)
SELECT columns
FROM logs_month WHERE DATE_FORMAT(month,"%m/%y") = '11/12';
You can do it with a TRIGGER also:
DELIMITER $$
CREATE TRIGGER `logs_m`
AFTER UPDATE ON `logs_month`
FOR EACH ROW
BEGIN
IF NEW.status=1 THEN
INSERT INTO some_table (field) values (NEW.field);
END IF;
END$$
DELIMITER ;
You can do it like this.
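A runnable sketch of the trigger idea using Python's sqlite3 (SQLite trigger syntax, so no DELIMITER dance; the WHEN clause plays the role of the IF; table layout and data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE logs_month (id INTEGER, status INTEGER, field TEXT)")
cur.execute("CREATE TABLE some_table (field TEXT)")
# SQLite flavour of the MySQL trigger above: copy the row when status becomes 1
cur.execute("""CREATE TRIGGER logs_m AFTER UPDATE ON logs_month
               WHEN NEW.status = 1
               BEGIN
                   INSERT INTO some_table (field) VALUES (NEW.field);
               END""")
cur.execute("INSERT INTO logs_month VALUES (1, 0, 'nov-entry')")
cur.execute("UPDATE logs_month SET status = 1 WHERE id = 1")
print(cur.execute("SELECT field FROM some_table").fetchall())  # [('nov-entry',)]
```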
qid & accept id:
(14124763, 14124880)
query:
How to write a Sql query to find distinct values that have never met the following "Where Not(a=x and b=x)"
soup:
One way would be
\nSELECT DISTINCT CustomerId FROM Attributes a \nWHERE NOT EXISTS (\n SELECT * FROM Attributes forbidden \n WHERE forbidden.CustomerId = a.CustomerId AND forbidden.Class = _forbiddenClassValue_ AND forbidden.Code = _forbiddenCodeValue_\n)\n
\nor with join
\nSELECT DISTINCT a.CustomerId FROM Attributes a\nLEFT JOIN (\n SELECT CustomerId FROM Attributes\n WHERE Class = _forbiddenClassValue_ AND Code = _forbiddenCodeValue_\n) havingForbiddenPair ON a.CustomerId = havingForbiddenPair.CustomerId\nWHERE havingForbiddenPair.CustomerId IS NULL\n
\nYet another way is to use EXCEPT, as per ypercube's answer
\n
soup wrap:
One way would be
SELECT DISTINCT CustomerId FROM Attributes a
WHERE NOT EXISTS (
SELECT * FROM Attributes forbidden
WHERE forbidden.CustomerId = a.CustomerId AND forbidden.Class = _forbiddenClassValue_ AND forbidden.Code = _forbiddenCodeValue_
)
or with join
SELECT DISTINCT a.CustomerId FROM Attributes a
LEFT JOIN (
SELECT CustomerId FROM Attributes
WHERE Class = _forbiddenClassValue_ AND Code = _forbiddenCodeValue_
) havingForbiddenPair ON a.CustomerId = havingForbiddenPair.CustomerId
WHERE havingForbiddenPair.CustomerId IS NULL
Yet another way is to use EXCEPT, as per ypercube's answer
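Both forms are anti-joins and return the same customers; a quick check in Python's sqlite3 with an invented forbidden Class/Code pair standing in for the placeholders:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Attributes (CustomerId INTEGER, Class TEXT, Code TEXT)")
cur.executemany("INSERT INTO Attributes VALUES (?, ?, ?)",
                [(1, "A", "X"), (1, "B", "Y"), (2, "A", "Z"), (3, "B", "Y")])
forbidden = ("B", "Y")  # hypothetical forbidden Class/Code pair

not_exists = cur.execute("""
    SELECT DISTINCT CustomerId FROM Attributes a
    WHERE NOT EXISTS (SELECT 1 FROM Attributes f
                      WHERE f.CustomerId = a.CustomerId
                        AND f.Class = ? AND f.Code = ?)""", forbidden).fetchall()

anti_join = cur.execute("""
    SELECT DISTINCT a.CustomerId FROM Attributes a
    LEFT JOIN (SELECT CustomerId FROM Attributes
               WHERE Class = ? AND Code = ?) h
           ON a.CustomerId = h.CustomerId
    WHERE h.CustomerId IS NULL""", forbidden).fetchall()

print(not_exists, anti_join)  # both [(2,)]
```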
qid & accept id:
(14159629, 14159677)
query:
sql table pivot
soup:
You did not specify what RDBMS you are using but this will work in all versions:
\nselect blog,\n id,\n max(case when attribute = 'pid' then value end) postid,\n max(case when attribute = 'date' then value end) date,\n max(case when attribute = 'title' then value end) title\nfrom yourtable\ngroup by blog, id\n
\n\nIf you are using a database with the PIVOT function, then your query will be like this:
\nselect blog, id, pid as postid, date, title\nfrom \n(\n select blog, id, attribute, value\n from yourtable\n) src\npivot\n(\n max(value)\n for attribute in (pid, date, title)\n) piv\n
\n\nThe result for both will be:
\n| BLOG | ID | POSTID | DATE | TITLE |\n-------------------------------------\n| p | 1 | abc1 | abc2 | abc3 |\n| p | 2 | abc1 | abc2 | abc3 |\n| p | 3 | abc1 | abc2 | abc3 |\n
\n
soup wrap:
You did not specify which RDBMS you are using, but this conditional-aggregation version should work in almost any database:
select blog,
id,
max(case when attribute = 'pid' then value end) postid,
max(case when attribute = 'date' then value end) date,
max(case when attribute = 'title' then value end) title
from yourtable
group by blog, id
If you are using a database with the PIVOT function, then your query will be like this:
select blog, id, pid as postid, date, title
from
(
select blog, id, attribute, value
from yourtable
) src
pivot
(
max(value)
for attribute in (pid, date, title)
) piv
The result for both will be:
| BLOG | ID | POSTID | DATE | TITLE |
-------------------------------------
| p | 1 | abc1 | abc2 | abc3 |
| p | 2 | abc1 | abc2 | abc3 |
| p | 3 | abc1 | abc2 | abc3 |
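The conditional-aggregation version runs unchanged on SQLite, so here is a runnable sketch via Python (one invented blog/id group):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE yourtable (blog TEXT, id INTEGER, attribute TEXT, value TEXT)")
cur.executemany("INSERT INTO yourtable VALUES (?, ?, ?, ?)",
                [("p", 1, "pid", "abc1"),
                 ("p", 1, "date", "abc2"),
                 ("p", 1, "title", "abc3")])
# MAX over a CASE picks out each attribute's value per group
pivoted = cur.execute("""
    SELECT blog, id,
           MAX(CASE WHEN attribute = 'pid' THEN value END) AS postid,
           MAX(CASE WHEN attribute = 'date' THEN value END) AS date,
           MAX(CASE WHEN attribute = 'title' THEN value END) AS title
    FROM yourtable GROUP BY blog, id""").fetchall()
print(pivoted)  # [('p', 1, 'abc1', 'abc2', 'abc3')]
```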
qid & accept id:
(14168940, 14168955)
query:
How to delete rows in other database tables
soup:
You want to add ON DELETE CASCADE to your foreign key constraints.
\nFirst, drop the current constraint without a cascading delete.
\nALTER TABLE Session_Completed\nDROP PRIMARY KEY pk_SessionId\n
\nThen, re-add the constraint with the ON DELETE CASCADES:
\nALTER TABLE Session_Completed\n add CONSTRAINT fk_sessionid\n FOREIGN KEY (SessionId)\n REFERENCES session(SessionId)\n ON DELETE CASCADE;\n
\n
soup wrap:
You want to add ON DELETE CASCADE to your foreign key constraints.
First, drop the current constraint without a cascading delete.
ALTER TABLE Session_Completed
DROP CONSTRAINT pk_SessionId
Then, re-add the constraint with ON DELETE CASCADE:
ALTER TABLE Session_Completed
add CONSTRAINT fk_sessionid
FOREIGN KEY (SessionId)
REFERENCES session(SessionId)
ON DELETE CASCADE;
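To see the cascade in action, here is a sketch in Python's sqlite3 (SQLite only enforces foreign keys once the pragma is switched on; table definitions invented to match the names above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
cur.execute("CREATE TABLE session (SessionId INTEGER PRIMARY KEY)")
cur.execute("""CREATE TABLE Session_Completed (
                   SessionId INTEGER,
                   FOREIGN KEY (SessionId) REFERENCES session(SessionId)
                       ON DELETE CASCADE)""")
cur.execute("INSERT INTO session VALUES (1)")
cur.execute("INSERT INTO Session_Completed VALUES (1)")
# Deleting the parent row cascades to the child table
cur.execute("DELETE FROM session WHERE SessionId = 1")
print(cur.execute("SELECT COUNT(*) FROM Session_Completed").fetchone())  # (0,)
```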
qid & accept id:
(14206236, 14206250)
query:
I have a table where i need to group and count 2 columns within a certain date range
soup:
SELECT LocationX, LocationY, City, Type, COUNT(*) CountOfLocation \nFROM tableName\nWHERE DateTimeStamp BETWEEN '2013-08-01 8:49:00' AND '2013-08-01 8:59:59'\nGROUP BY LocationX, LocationY, City, Type\n
\n\nUPDATE
\nSELECT LocationX, LocationY, City, Type, COUNT(*) AS CountOfLocation \nFROM tableName\nWHERE DateTimeStamp BETWEEN #2013-08-01 08:49:00# AND #2013-08-01 08:59:59#\nGROUP BY LocationX, LocationY, City, Type\n
\n
soup wrap:
SELECT LocationX, LocationY, City, Type, COUNT(*) CountOfLocation
FROM tableName
WHERE DateTimeStamp BETWEEN '2013-08-01 8:49:00' AND '2013-08-01 8:59:59'
GROUP BY LocationX, LocationY, City, Type
UPDATE
SELECT LocationX, LocationY, City, Type, COUNT(*) AS CountOfLocation
FROM tableName
WHERE DateTimeStamp BETWEEN #2013-08-01 08:49:00# AND #2013-08-01 08:59:59#
GROUP BY LocationX, LocationY, City, Type
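A runnable sketch of the BETWEEN + GROUP BY pattern in Python's sqlite3 (invented rows; the timestamps are zero-padded here because SQLite compares them as strings):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tableName (City TEXT, Type TEXT, DateTimeStamp TEXT)")
cur.executemany("INSERT INTO tableName VALUES (?, ?, ?)",
                [("NY", "A", "2013-08-01 08:50:00"),
                 ("NY", "A", "2013-08-01 08:55:00"),
                 ("LA", "B", "2013-08-01 09:30:00")])
# Filter to the window first, then count per group
out = cur.execute("""SELECT City, Type, COUNT(*) AS CountOfLocation
                     FROM tableName
                     WHERE DateTimeStamp BETWEEN '2013-08-01 08:49:00'
                                             AND '2013-08-01 08:59:59'
                     GROUP BY City, Type""").fetchall()
print(out)  # [('NY', 'A', 2)]
```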
qid & accept id:
(14211346, 14211957)
query:
How to remove white space characters from a string in SQL Server
soup:
Using ASCII(RIGHT(ProductAlternateKey, 1)) you can see that the right most character in row 2 is a Line Feed or Ascii Character 10.
\nThis can not be removed using the standard LTrim RTrim functions.
\nYou could however use (REPLACE(ProductAlternateKey, CHAR(10), '')
\nYou may also want to account for carriage returns and tabs. These three (Line feeds, carriage returns and tabs) are the usual culprits and can be removed with the following :
\nLTRIM(RTRIM(REPLACE(REPLACE(REPLACE(ProductAlternateKey, CHAR(10), ''), CHAR(13), ''), CHAR(9), '')))\n
\nIf you encounter any more "white space" characters that can't be removed with the above then try one or all of the below:
\n--NULL\nReplace([YourString],CHAR(0),'');\n--Horizontal Tab\nReplace([YourString],CHAR(9),'');\n--Line Feed\nReplace([YourString],CHAR(10),'');\n--Vertical Tab\nReplace([YourString],CHAR(11),'');\n--Form Feed\nReplace([YourString],CHAR(12),'');\n--Carriage Return\nReplace([YourString],CHAR(13),'');\n--Column Break\nReplace([YourString],CHAR(14),'');\n--Non-breaking space\nReplace([YourString],CHAR(160),'');\n
\nThis list of potential white space characters could be used to create a function such as :
\nCreate Function [dbo].[CleanAndTrimString] \n(@MyString as varchar(Max))\nReturns varchar(Max)\nAs\nBegin\n --NULL\n Set @MyString = Replace(@MyString,CHAR(0),'');\n --Horizontal Tab\n Set @MyString = Replace(@MyString,CHAR(9),'');\n --Line Feed\n Set @MyString = Replace(@MyString,CHAR(10),'');\n --Vertical Tab\n Set @MyString = Replace(@MyString,CHAR(11),'');\n --Form Feed\n Set @MyString = Replace(@MyString,CHAR(12),'');\n --Carriage Return\n Set @MyString = Replace(@MyString,CHAR(13),'');\n --Column Break\n Set @MyString = Replace(@MyString,CHAR(14),'');\n --Non-breaking space\n Set @MyString = Replace(@MyString,CHAR(160),'');\n\n Set @MyString = LTRIM(RTRIM(@MyString));\n Return @MyString\nEnd\nGo\n
\nWhich you could then use as follows:
\nSelect \n dbo.CleanAndTrimString(ProductAlternateKey) As ProductAlternateKey\nfrom DimProducts\n
\n
soup wrap:
Using ASCII(RIGHT(ProductAlternateKey, 1)) you can see that the rightmost character in row 2 is a line feed, ASCII character 10.
This cannot be removed using the standard LTRIM/RTRIM functions.
You could, however, use REPLACE(ProductAlternateKey, CHAR(10), '').
You may also want to account for carriage returns and tabs. These three (line feeds, carriage returns and tabs) are the usual culprits and can be removed with the following:
LTRIM(RTRIM(REPLACE(REPLACE(REPLACE(ProductAlternateKey, CHAR(10), ''), CHAR(13), ''), CHAR(9), '')))
If you encounter any more "white space" characters that can't be removed with the above then try one or all of the below:
--NULL
Replace([YourString],CHAR(0),'');
--Horizontal Tab
Replace([YourString],CHAR(9),'');
--Line Feed
Replace([YourString],CHAR(10),'');
--Vertical Tab
Replace([YourString],CHAR(11),'');
--Form Feed
Replace([YourString],CHAR(12),'');
--Carriage Return
Replace([YourString],CHAR(13),'');
--Column Break
Replace([YourString],CHAR(14),'');
--Non-breaking space
Replace([YourString],CHAR(160),'');
This list of potential white space characters could be used to create a function such as:
Create Function [dbo].[CleanAndTrimString]
(@MyString as varchar(Max))
Returns varchar(Max)
As
Begin
--NULL
Set @MyString = Replace(@MyString,CHAR(0),'');
--Horizontal Tab
Set @MyString = Replace(@MyString,CHAR(9),'');
--Line Feed
Set @MyString = Replace(@MyString,CHAR(10),'');
--Vertical Tab
Set @MyString = Replace(@MyString,CHAR(11),'');
--Form Feed
Set @MyString = Replace(@MyString,CHAR(12),'');
--Carriage Return
Set @MyString = Replace(@MyString,CHAR(13),'');
--Column Break
Set @MyString = Replace(@MyString,CHAR(14),'');
--Non-breaking space
Set @MyString = Replace(@MyString,CHAR(160),'');
Set @MyString = LTRIM(RTRIM(@MyString));
Return @MyString
End
Go
Which you could then use as follows:
Select
dbo.CleanAndTrimString(ProductAlternateKey) As ProductAlternateKey
from DimProducts
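The REPLACE-then-trim chain works the same way in SQLite, which also has CHAR() and REPLACE(); a quick check via Python on an invented dirty string:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
dirty = "AB-123\n\t "  # trailing line feed, tab and space
# Same idea as the T-SQL above: strip LF/CR/tab with REPLACE, then trim spaces
cleaned = cur.execute("""SELECT TRIM(REPLACE(REPLACE(REPLACE(?,
                                 CHAR(10), ''), CHAR(13), ''), CHAR(9), ''))""",
                      (dirty,)).fetchone()[0]
print(repr(cleaned))  # 'AB-123'
```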
qid & accept id:
(14253673, 14256098)
query:
Applying Where clause for Order by in SQL
soup:
The problem is the table violates first normal form, EmpLotusNotes should not contain the name of an employee and the country, presumably the country they work in.
\nYou should challenge the reasons why you are not allowed to clean up the structure and the data.
\nSee https://www.google.com.au/search?q=sql+first+normal+form+atomic
\nThe answer, if you still cannot normalise the database after challenging, is create a query for countries, create a query to split the data in the first table into first normal form, then join the two.
\nAn example that works for mysql follows, for MS SQL you would use CHARINDEX instead of INSTR and substring instead of substr.
\nselect employeesWithCountries.*\n, countries.sort \nfrom (\n select empId, empLotusNotes, substr( empLotusNotes, afterStartOfDelimiter ) country from (\n select empId\n , empLotusNotes\n , INSTR( empLotusNotes, '/' ) + 1 as afterStartOfDelimiter \n from EmployeesLotusNotes\n ) employees\n) employeesWithCountries\ninner join (\n SELECT 'Japan' as country, 1 as sort\n union\n SELECT 'China' as country, 2 as sort\n union\n SELECT 'India' as country, 3 as sort\n union\n SELECT 'USA' as country, 4 as sort\n) countries\non employeesWithCountries.country = countries.country\norder by countries.sort, employeesWithCountries.empLotusNotes\n
\nResults.
\n30003 Kyo Jun/Japan Japan 1\n40004 Jee Lee/China China 2\n10001 Amit B/India India 3\n20002 Bharat C/India India 3\n50005 Xavier K/USA USA 4\n
\n
soup wrap:
The problem is that the table violates first normal form: EmpLotusNotes should not contain both the name of an employee and the country, presumably the country they work in.
You should challenge the reasons why you are not allowed to clean up the structure and the data.
See https://www.google.com.au/search?q=sql+first+normal+form+atomic
The answer, if you still cannot normalise the database after challenging, is to create a query for countries, create a query to split the data in the first table into first normal form, and then join the two.
An example that works for MySQL follows; for MS SQL you would use CHARINDEX instead of INSTR and SUBSTRING instead of SUBSTR.
select employeesWithCountries.*
, countries.sort
from (
select empId, empLotusNotes, substr( empLotusNotes, afterStartOfDelimiter ) country from (
select empId
, empLotusNotes
, INSTR( empLotusNotes, '/' ) + 1 as afterStartOfDelimiter
from EmployeesLotusNotes
) employees
) employeesWithCountries
inner join (
SELECT 'Japan' as country, 1 as sort
union
SELECT 'China' as country, 2 as sort
union
SELECT 'India' as country, 3 as sort
union
SELECT 'USA' as country, 4 as sort
) countries
on employeesWithCountries.country = countries.country
order by countries.sort, employeesWithCountries.empLotusNotes
Results.
30003 Kyo Jun/Japan Japan 1
40004 Jee Lee/China China 2
10001 Amit B/India India 3
20002 Bharat C/India India 3
50005 Xavier K/USA USA 4
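Here is a runnable sketch of the same split-then-join idea in Python's sqlite3, which happens to share MySQL's INSTR/SUBSTR spelling (two invented employees, two countries):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE EmployeesLotusNotes (empId INTEGER, empLotusNotes TEXT)")
cur.executemany("INSERT INTO EmployeesLotusNotes VALUES (?, ?)",
                [(10001, "Amit B/India"), (30003, "Kyo Jun/Japan")])
# Extract the country after '/', join to an inline sort table, order by it
out = cur.execute("""
    SELECT e.empId,
           SUBSTR(e.empLotusNotes, INSTR(e.empLotusNotes, '/') + 1) AS country
    FROM EmployeesLotusNotes e
    JOIN (SELECT 'Japan' AS country, 1 AS sort
          UNION SELECT 'India', 3) c
      ON SUBSTR(e.empLotusNotes, INSTR(e.empLotusNotes, '/') + 1) = c.country
    ORDER BY c.sort""").fetchall()
print(out)  # [(30003, 'Japan'), (10001, 'India')]
```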
qid & accept id:
(14285554, 14288371)
query:
Zend Database Table getrow
soup:
There does not appear a way to do it in a single simple query. Also, fetchOne only gets the first column of the first record. That helper would return just the ID, and not the product_key.
\nOption 1: \nModify your ProductKeys model to get the key and set it as used:
\nclass My_Model_ProductKeys extends Zend_Db_Table_Abstract\n...\nfunction getKeyAndMarkUsed()\n{\n $select = $this->select();\n $select->where('used=?',1)->limit(1)->order('ID');\n $keyRow = $this->fetchRow();\n if ($keyRow){\n $this->update(array('used'=>1),'id='.$keyRow->ID);\n return $keyRow->id;\n }\n else{\n //no keys left! what to do??? Create a new key?\n throw new Exception('No keys left!');\n }\n}\n
\nThen you would just:
\n$productKey = $this->_helper->model('ProductKeys')->getKeyAndMarkUsed();\n
\nOption 2: \nMake a database procedure to do the above functionality and call that instead.
\n
soup wrap:
There does not appear to be a way to do it in a single simple query. Also, fetchOne only gets the first column of the first record. That helper would return just the ID, and not the product_key.
Option 1:
Modify your ProductKeys model to get the key and set it as used:
class My_Model_ProductKeys extends Zend_Db_Table_Abstract
...
function getKeyAndMarkUsed()
{
    // fetch the first unused key (used = 0), then flag it as used
    $select = $this->select();
    $select->where('used = ?', 0)->limit(1)->order('ID');
    $keyRow = $this->fetchRow($select);
    if ($keyRow){
        $this->update(array('used' => 1), 'ID = ' . (int)$keyRow->ID);
        return $keyRow->ID;
    }
    else{
        //no keys left! what to do??? Create a new key?
        throw new Exception('No keys left!');
    }
}
Then you would just:
$productKey = $this->_helper->model('ProductKeys')->getKeyAndMarkUsed();
Option 2:
Make a database procedure to do the above functionality and call that instead.
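The fetch-then-flag pattern behind Option 1 is easy to sketch outside Zend; here is a minimal version in Python's sqlite3 (table layout and key values invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE product_keys (
                   ID INTEGER PRIMARY KEY,
                   product_key TEXT,
                   used INTEGER)""")
cur.executemany("INSERT INTO product_keys (product_key, used) VALUES (?, ?)",
                [("KEY-AAA", 0), ("KEY-BBB", 0)])

def get_key_and_mark_used(cur):
    # same fetch-then-flag pattern as the Zend model sketched above
    row = cur.execute("""SELECT ID, product_key FROM product_keys
                         WHERE used = 0 ORDER BY ID LIMIT 1""").fetchone()
    if row is None:
        raise RuntimeError("No keys left!")
    cur.execute("UPDATE product_keys SET used = 1 WHERE ID = ?", (row[0],))
    return row[1]

print(get_key_and_mark_used(cur))  # KEY-AAA
print(get_key_and_mark_used(cur))  # KEY-BBB
```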
qid & accept id:
(14286714, 14287267)
query:
SQL sum of column value, unique per user per day
soup:
Try something like:
\nSELECT\n DATE(created_at) AS date,\n SUM(CASE WHEN state = 'complete' THEN 1 ELSE 0 END) AS complete,\n SUM(CASE WHEN state = 'paid' THEN 1 ELSE 0 END) AS paid,\n COUNT(DISTINCT CASE WHEN state IN('new','paying','completing') THEN user_id ELSE NULL END) AS in_progress,\n COUNT(DISTINCT CASE WHEN state IN('payment_failed','completion_failed') THEN user_id ELSE NULL END) AS failed\nFROM orders\nWHERE created_at BETWEEN ? AND ?\nGROUP BY DATE(created_at);\n
\nThe main idea - COUNT (DISTINCT ...) will count unique user_id and wont count NULL values.
\nDetails: aggregate functions, 4.2.7. Aggregate Expressions
\nThe whole query with same style counts and simplified CASE WHEN ...:
\nSELECT\n DATE(created_at) AS date,\n COUNT(CASE WHEN state = 'complete' THEN 1 END) AS complete,\n COUNT(CASE WHEN state = 'paid' THEN 1 END) AS paid,\n COUNT(DISTINCT CASE WHEN state IN('new','paying','completing') THEN user_id END) AS in_progress,\n COUNT(DISTINCT CASE WHEN state IN('payment_failed','completion_failed') THEN user_id END) AS failed\nFROM orders\nWHERE created_at BETWEEN ? AND ?\nGROUP BY DATE(created_at);\n
\n
soup wrap:
Try something like:
SELECT
DATE(created_at) AS date,
SUM(CASE WHEN state = 'complete' THEN 1 ELSE 0 END) AS complete,
SUM(CASE WHEN state = 'paid' THEN 1 ELSE 0 END) AS paid,
COUNT(DISTINCT CASE WHEN state IN('new','paying','completing') THEN user_id ELSE NULL END) AS in_progress,
COUNT(DISTINCT CASE WHEN state IN('payment_failed','completion_failed') THEN user_id ELSE NULL END) AS failed
FROM orders
WHERE created_at BETWEEN ? AND ?
GROUP BY DATE(created_at);
The main idea - COUNT(DISTINCT ...) will count unique user_id values and won't count NULL values.
Details: aggregate functions, 4.2.7. Aggregate Expressions
The whole query with same style counts and simplified CASE WHEN ...:
SELECT
DATE(created_at) AS date,
COUNT(CASE WHEN state = 'complete' THEN 1 END) AS complete,
COUNT(CASE WHEN state = 'paid' THEN 1 END) AS paid,
COUNT(DISTINCT CASE WHEN state IN('new','paying','completing') THEN user_id END) AS in_progress,
COUNT(DISTINCT CASE WHEN state IN('payment_failed','completion_failed') THEN user_id END) AS failed
FROM orders
WHERE created_at BETWEEN ? AND ?
GROUP BY DATE(created_at);
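A runnable sketch of the COUNT(DISTINCT CASE ...) idea in Python's sqlite3 (invented orders; one day, three users):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (created_at TEXT, state TEXT, user_id INTEGER)")
cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [("2013-01-05", "complete", 1),
                 ("2013-01-05", "paying", 2),
                 ("2013-01-05", "paying", 2),
                 ("2013-01-05", "new", 3)])
# CASE yields NULL for non-matching rows, so COUNT skips them;
# DISTINCT collapses repeat user_ids
out = cur.execute("""
    SELECT DATE(created_at),
           COUNT(CASE WHEN state = 'complete' THEN 1 END),
           COUNT(DISTINCT CASE WHEN state IN ('new','paying','completing')
                               THEN user_id END)
    FROM orders GROUP BY DATE(created_at)""").fetchall()
print(out)  # [('2013-01-05', 1, 2)]
```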
qid & accept id:
(14296002, 14296370)
query:
Need to find Average of top 3 records grouped by ID in SQL
soup:
First - get the max(maxattached) for every customer and month:
\nSELECT id,\n max(maxattached) as max_att \nFROM myTable \nWHERE weekending >= now() - interval '1 year' \nGROUP BY id, date_trunc('month',weekending);\n
\nNext - for every customer rank all his values:
\nSELECT id,\n max_att,\n row_number() OVER (PARTITION BY id ORDER BY max_att DESC) as max_att_rank\nFROM ;\n
\nNext - get the top 3 for every customer:
\nSELECT id,\n max_att\nFROM \nWHERE max_att_rank <= 3;\n
\nNext - get the avg of the values for every customer:
\nSELECT id,\n avg(max_att) as avg_att\nFROM \nGROUP BY id;\n
\nNext - just put all the queries together and rewrite/simplify them for your case.
\nUPDATE: Here is an SQLFiddle with your test data and the queries: SQLFiddle.
\nUPDATE2: Here is the query, that will work on 8.1 :
\nSELECT customer_id,\n (SELECT round(avg(max_att),0)\n FROM (SELECT max(maxattached) as max_att \n FROM table1\n WHERE weekending >= now() - interval '2 year' \n AND id = ct.customer_id\n GROUP BY date_trunc('month',weekending)\n ORDER BY max_att DESC\n LIMIT 3) sub \n ) as avg_att\nFROM customer_table ct;\n
\nThe idea - to take your initial query and run it for every customer (customer_table - table with all unique id for customers).
\nHere is SQLFiddle with this query: SQLFiddle.
\nOnly tested on version 8.3 (8.1 is too old to be on SQLFiddle).
\n
soup wrap:
First - get the max(maxattached) for every customer and month:
SELECT id,
max(maxattached) as max_att
FROM myTable
WHERE weekending >= now() - interval '1 year'
GROUP BY id, date_trunc('month',weekending);
Next - for every customer, rank all of its values:
SELECT id,
max_att,
row_number() OVER (PARTITION BY id ORDER BY max_att DESC) as max_att_rank
FROM ;
Next - get the top 3 for every customer:
SELECT id,
max_att
FROM
WHERE max_att_rank <= 3;
Next - get the avg of the values for every customer:
SELECT id,
avg(max_att) as avg_att
FROM
GROUP BY id;
Next - just put all the queries together and rewrite/simplify them for your case.
UPDATE: Here is an SQLFiddle with your test data and the queries: SQLFiddle.
UPDATE2: Here is the query that will work on 8.1:
SELECT customer_id,
(SELECT round(avg(max_att),0)
FROM (SELECT max(maxattached) as max_att
FROM table1
WHERE weekending >= now() - interval '2 year'
AND id = ct.customer_id
GROUP BY date_trunc('month',weekending)
ORDER BY max_att DESC
LIMIT 3) sub
) as avg_att
FROM customer_table ct;
The idea - to take your initial query and run it for every customer (customer_table - table with all unique id for customers).
Here is SQLFiddle with this query: SQLFiddle.
Only tested on version 8.3 (8.1 is too old to be on SQLFiddle).
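The core of the 8.1-friendly query - top 3 values per customer via ORDER BY ... LIMIT 3, then AVG - can be sketched in Python's sqlite3, looping over customers instead of relying on the correlated subquery (sample data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE table1 (id INTEGER, maxattached INTEGER)")
cur.executemany("INSERT INTO table1 VALUES (?, ?)",
                [(1, 10), (1, 20), (1, 30), (1, 40), (2, 5)])

# for each customer: take the top 3 values, then average them
result = {}
for (cust,) in cur.execute("SELECT DISTINCT id FROM table1 ORDER BY id").fetchall():
    avg = cur.execute("""SELECT AVG(v) FROM
                             (SELECT maxattached AS v FROM table1
                              WHERE id = ?
                              ORDER BY maxattached DESC LIMIT 3)""",
                      (cust,)).fetchone()[0]
    result[cust] = avg
print(result)  # {1: 30.0, 2: 5.0}
```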
qid & accept id:
(14313834, 14314043)
query:
Apply the same aggregate to every column in a table
soup:
First, since COUNT() only counts non-null values, your query can be simplified:
\nSELECT count(DISTINCT names) AS unique_names\n ,count(names) AS names_not_null\nFROM table;\n
\nBut that's the number of non-null values and contradicts your description:
\n\ncount of the number of null values in the column
\n
\nFor that you would use:
\ncount(*) - count(names) AS names_null\n
\nSince count(*) count all rows and count(names) only rows with non-null names.
\nRemoved inferior alternative after hint by @Andriy.
\nTo automate that for all columns build an SQL statement off of the catalog table pg_attribute dynamically. You can use EXECUTE in a PL/pgSQL function to execute it immediately. Find full code examples with links to the manual and explanation under these closely related questions:
\n\n- How to perform the same aggregation on every column, without listing the columns?
\n- postgresql - count (no null values) of each column in a table
\n
\n
soup wrap:
First, since COUNT() only counts non-null values, your query can be simplified:
SELECT count(DISTINCT names) AS unique_names
,count(names) AS names_not_null
FROM table;
But that's the number of non-null values and contradicts your description:
count of the number of null values in the column
For that you would use:
count(*) - count(names) AS names_null
Since count(*) counts all rows and count(names) counts only rows with non-null names.
Removed inferior alternative after hint by @Andriy.
To automate that for all columns, build an SQL statement off of the catalog table pg_attribute dynamically. You can use EXECUTE in a PL/pgSQL function to execute it immediately. Find full code examples, with links to the manual and explanation, under these closely related questions:
- How to perform the same aggregation on every column, without listing the columns?
- postgresql - count (no null values) of each column in a table
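A quick check of the three counts in Python's sqlite3 (invented names column with one NULL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (names TEXT)")
cur.executemany("INSERT INTO t VALUES (?)", [("ann",), ("ann",), ("bob",), (None,)])
out = cur.execute("""SELECT count(DISTINCT names),   -- unique non-null names
                            count(names),            -- non-null rows
                            count(*) - count(names)  -- null rows
                     FROM t""").fetchone()
print(out)  # (2, 3, 1)
```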
qid & accept id:
(14345171, 14477252)
query:
How to get data from two databases in two servers with one SELECT statement?
soup:
I have done this with MySQL,Oracle and SQL server. You can create linked servers from a central MSSQL server to your Oracle and other MSSQL servers. You can then either query the object directly using the linked server or you can create a synonymn to the linked server tables in your database.
\nSteps around creating and using a linked server are:
\n\n- On your "main" MSSQL server create two linked servers to the servers that contains the two databases or as you said database A and database B.
\n- You can then query the tables on the linked servers directly using plain TSQL select statements.
\n
\nTo create a linked server to Oracle see this link: http://support.microsoft.com/kb/280106
\nA little more about synonyms. If you are going to be using these linked server tables in a LOT of queries it might be worth the effort to use synonymns to help maintain the code for you. A synonymn allows you to reference something under a different name.
\nSo for example when selecting data from a linked server you would generally use the following syntax to get the data:
\nSELECT *\nFROM Linkedserver.database.schema.table\n
\nIf you created a synonym for Linkedserver.database.schema.table as DBTable1 the syntax would be:
\nSELECT *\nFROM DBTable1\n
\nIt saves a bit on typing plus if your linked server ever changed you would not need to go do changes all over your code. Like I said this can really be of benefit if you use linked servers in a lot of code.
\nOn a more cautionary note you CAN do a join between two table on different servers. HOwever this is normally painfully slow. I have found that you can select the data from the different server into temp tables and joining the temp tables can generally speed things up. Your milage might vary but if you are going to join the tables on the different servers this technique can help.
\nLet me know if you need more details.
\n
soup wrap:
I have done this with MySQL, Oracle and SQL Server. You can create linked servers from a central MSSQL server to your Oracle and other MSSQL servers. You can then either query the objects directly using the linked server, or you can create synonyms for the linked server tables in your database.
Steps around creating and using a linked server are:
- On your "main" MSSQL server create two linked servers to the servers that contain the two databases, or as you said database A and database B.
- You can then query the tables on the linked servers directly using plain TSQL select statements.
To create a linked server to Oracle see this link: http://support.microsoft.com/kb/280106
A little more about synonyms. If you are going to be using these linked server tables in a LOT of queries it might be worth the effort to use synonyms to help maintain the code for you. A synonym allows you to reference something under a different name.
So for example when selecting data from a linked server you would generally use the following syntax to get the data:
SELECT *
FROM Linkedserver.database.schema.table
If you created a synonym for Linkedserver.database.schema.table as DBTable1 the syntax would be:
SELECT *
FROM DBTable1
It saves a bit on typing plus if your linked server ever changed you would not need to go do changes all over your code. Like I said this can really be of benefit if you use linked servers in a lot of code.
On a more cautionary note, you CAN do a join between two tables on different servers. However, this is normally painfully slow. I have found that you can select the data from the different servers into temp tables, and joining the temp tables can generally speed things up. Your mileage might vary, but if you are going to join tables on different servers this technique can help.
Let me know if you need more details.
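SQLite has no linked servers, but its ATTACH DATABASE gives the same flavour of one SELECT spanning two databases; a hypothetical two-database sketch in Python:

```python
import os
import sqlite3
import tempfile

# Hypothetical two-database setup; ATTACH plays the role of a linked server
d = tempfile.mkdtemp()
path_a, path_b = os.path.join(d, "a.db"), os.path.join(d, "b.db")
for path, val in [(path_a, "from_A"), (path_b, "from_B")]:
    c = sqlite3.connect(path)
    c.execute("CREATE TABLE data (val TEXT)")
    c.execute("INSERT INTO data VALUES (?)", (val,))
    c.commit()
    c.close()

conn = sqlite3.connect(path_a)
conn.execute("ATTACH DATABASE ? AS dbB", (path_b,))
# One SELECT spanning both databases, qualified much like server.database..table
rows = conn.execute("""SELECT val FROM main.data
                       UNION ALL
                       SELECT val FROM dbB.data""").fetchall()
print(rows)
```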
qid & accept id:
(14355527, 14355745)
query:
Get row, if ID is not in Array/comma-seperated-list
soup:
Don't really like your solution. You are making things a lot harder for yourself with your underlying database design.
\nYou have two tables, one representing users and another representing questions. What you really need is a table linking the two concepts, something like user-questions.
\nSuggested design:-
\ncreate table `user-questions`\n(\n user_id int,\n question_id int,\n answered datetime\n)\n
\nSuggested approach for recording answers.
\nEvery time your user answers a question, whack a row into user-questions to signify the fact that a user has answered the question.
\nUnder this structure, solving your specific problem, finding questions that haven't been answered yet, becomes trivial.
\n-- Find a question that hasn't been answered by user id 22.\nSELECT\n q.* \nFROM \n `questions`\nLEFT OUTER JOIN `user-questions` uq\nON q.question_id = uq.question_id\n-- Just a sample user ID\nAND uq.user_id = 22\nWHERE\n uq.question_id IS NULL\n
\nI don't play day to day with MySQL, so please feel free to correct any typos, SO'ers. The approach is sound, though.
\n
soup wrap:
Don't really like your solution. You are making things a lot harder for yourself with your underlying database design.
You have two tables, one representing users and another representing questions. What you really need is a table linking the two concepts, something like user-questions.
Suggested design:-
create table `user-questions`
(
user_id int,
question_id int,
answered datetime
)
Suggested approach for recording answers.
Every time your user answers a question, whack a row into user-questions to signify the fact that a user has answered the question.
Under this structure, solving your specific problem, finding questions that haven't been answered yet, becomes trivial.
-- Find a question that hasn't been answered by user id 22.
SELECT
q.*
FROM
`questions`
LEFT OUTER JOIN `user-questions` uq
ON q.question_id = uq.question_id
-- Just a sample user ID
AND uq.user_id = 22
WHERE
uq.question_id IS NULL
I don't play day to day with MySQL, so please feel free to correct any typos, SO'ers. The approach is sound, though.
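To make the anti-join concrete, here is a minimal runnable sketch using Python's sqlite3. The questions and user-questions tables and user id 22 mirror the answer; the sample rows are made up.

```python
import sqlite3

# In-memory version of the suggested schema; sample data is hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE questions (question_id INTEGER PRIMARY KEY, body TEXT);
CREATE TABLE "user-questions" (user_id INTEGER, question_id INTEGER, answered TEXT);
INSERT INTO questions VALUES (1, 'q1'), (2, 'q2'), (3, 'q3');
-- User 22 has answered questions 1 and 3 only.
INSERT INTO "user-questions" VALUES (22, 1, '2013-01-01'), (22, 3, '2013-01-02');
""")

# LEFT JOIN + IS NULL keeps only questions with no matching answer row.
unanswered = conn.execute("""
    SELECT q.question_id
    FROM questions q
    LEFT OUTER JOIN "user-questions" uq
      ON q.question_id = uq.question_id AND uq.user_id = 22
    WHERE uq.question_id IS NULL
""").fetchall()
print(unanswered)  # [(2,)]
```

Note that the user_id filter belongs in the ON clause, not the WHERE clause; moving it to WHERE would turn the outer join back into an inner join.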
qid & accept id:
(14366759, 14368352)
query:
How do you perform a join to a table with "OR" conditions?
soup:
SELECT o.* \nFROM dbo.Orders o\nWHERE EXISTS ( SELECT * FROM dbo.Transactions t1 \n WHERE t1.OrderId = o.OrderId AND t1.Code = 'TX33'\n )\n AND EXISTS ( SELECT * FROM dbo.Transactions t2 \n WHERE t2.OrderId = o.OrderId AND t2.Code = 'TX34'\n )\n AND\n ( EXISTS ( SELECT * FROM dbo.Transactions t1 \n WHERE t1.OrderId = o.OrderId AND t1.Code = 'TX35'\n )\n AND EXISTS ( SELECT * FROM dbo.Transactions t2 \n WHERE t2.OrderId = o.OrderId AND t2.Code = 'TX36'\n\n OR EXISTS ( SELECT * FROM dbo.Transactions t \n WHERE t.OrderId = o.OrderId AND t.Code = 'TX37'\n )\n\n OR EXISTS ( SELECT * FROM dbo.Transactions t1 \n WHERE t1.OrderId = o.OrderId AND t1.Code = 'TX38'\n )\n AND EXISTS ( SELECT * FROM dbo.Transactions t2 \n WHERE t2.OrderId = o.OrderId AND t2.Code = 'TX39'\n )\n ) ;\n
\n
\nYou could also write it like this:
\nSELECT o.* \nFROM dbo.Orders o\n JOIN\n ( SELECT OrderId\n FROM dbo.Transactions\n WHERE Code IN ('TX33', 'TX34', 'TX35', 'TX36', 'TX37', 'TX38', 'TX39')\n GROUP BY OrderId\n HAVING COUNT(DISTINCT CASE WHEN Code = 'TX33' THEN Code END) = 1\n AND COUNT(DISTINCT CASE WHEN Code = 'TX34' THEN Code END) = 1\n AND ( COUNT(DISTINCT \n CASE WHEN Code IN ('TX35', 'TX36') THEN Code END) = 2\n OR COUNT(DISTINCT CASE WHEN Code = 'TX37' THEN Code END) = 1\n OR COUNT(DISTINCT \n CASE WHEN Code IN ('TX38', 'TX39') THEN Code END) = 2\n ) \n ) t\n ON t.OrderId = o.OrderId ;\n
\n
soup wrap:
SELECT o.*
FROM dbo.Orders o
WHERE EXISTS ( SELECT * FROM dbo.Transactions t1
WHERE t1.OrderId = o.OrderId AND t1.Code = 'TX33'
)
AND EXISTS ( SELECT * FROM dbo.Transactions t2
WHERE t2.OrderId = o.OrderId AND t2.Code = 'TX34'
)
AND
( EXISTS ( SELECT * FROM dbo.Transactions t1
WHERE t1.OrderId = o.OrderId AND t1.Code = 'TX35'
)
AND EXISTS ( SELECT * FROM dbo.Transactions t2
WHERE t2.OrderId = o.OrderId AND t2.Code = 'TX36'
OR EXISTS ( SELECT * FROM dbo.Transactions t
WHERE t.OrderId = o.OrderId AND t.Code = 'TX37'
)
OR EXISTS ( SELECT * FROM dbo.Transactions t1
WHERE t1.OrderId = o.OrderId AND t1.Code = 'TX38'
)
AND EXISTS ( SELECT * FROM dbo.Transactions t2
WHERE t2.OrderId = o.OrderId AND t2.Code = 'TX39'
)
) ;
You could also write it like this:
SELECT o.*
FROM dbo.Orders o
JOIN
( SELECT OrderId
FROM dbo.Transactions
WHERE Code IN ('TX33', 'TX34', 'TX35', 'TX36', 'TX37', 'TX38', 'TX39')
GROUP BY OrderId
HAVING COUNT(DISTINCT CASE WHEN Code = 'TX33' THEN Code END) = 1
AND COUNT(DISTINCT CASE WHEN Code = 'TX34' THEN Code END) = 1
AND ( COUNT(DISTINCT
CASE WHEN Code IN ('TX35', 'TX36') THEN Code END) = 2
OR COUNT(DISTINCT CASE WHEN Code = 'TX37' THEN Code END) = 1
OR COUNT(DISTINCT
CASE WHEN Code IN ('TX38', 'TX39') THEN Code END) = 2
)
) t
ON t.OrderId = o.OrderId ;
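The conditional-aggregation version (the second query) ports to most engines. A small sketch in Python's sqlite3, with invented transaction rows, shows the HAVING logic keeping only the order that has TX33, TX34 and one of the alternative code sets:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Transactions (OrderId INTEGER, Code TEXT);
-- Order 1 qualifies: TX33, TX34 and TX37. Order 2 is missing TX34.
INSERT INTO Transactions VALUES
  (1, 'TX33'), (1, 'TX34'), (1, 'TX37'),
  (2, 'TX33'), (2, 'TX37');
""")

# Each COUNT(DISTINCT CASE ...) counts how many of a code set are present,
# because the CASE yields NULL (ignored by COUNT) for non-matching rows.
rows = conn.execute("""
    SELECT OrderId
    FROM Transactions
    WHERE Code IN ('TX33','TX34','TX35','TX36','TX37','TX38','TX39')
    GROUP BY OrderId
    HAVING COUNT(DISTINCT CASE WHEN Code = 'TX33' THEN Code END) = 1
       AND COUNT(DISTINCT CASE WHEN Code = 'TX34' THEN Code END) = 1
       AND ( COUNT(DISTINCT CASE WHEN Code IN ('TX35','TX36') THEN Code END) = 2
          OR COUNT(DISTINCT CASE WHEN Code = 'TX37' THEN Code END) = 1
          OR COUNT(DISTINCT CASE WHEN Code IN ('TX38','TX39') THEN Code END) = 2 )
""").fetchall()
print(rows)  # [(1,)]
```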
qid & accept id:
(14372302, 14372345)
query:
Sql query to get result from 3 tables
soup:
You should use UNION. Try this (untested):
\nSELECT t.title_name, s.source_name, t1.text_content, t1.added_date \nFROM Table1 t1\nJOIN Title T \n ON t1.TitleId = T.TitleId\nJOIN Source S \n ON t1.SourceId = S.SourceId\nUNION\nSELECT t.title_name, s.source_name, t2.description, t2.added_date \nFROM Table2 t2\nJOIN Title T \n ON t2.TitleId = T.TitleId\nJOIN Source S \n ON t2.SourceId = S.SourceId\nUNION\nSELECT t.title_name, s.source_name, t3.description, t3.added_date \nFROM Table3 t3\nJOIN Title T \n ON t3.TitleId = T.TitleId\nJOIN Source S \n ON t3.SourceId = S.SourceId\n
\nWell I just realized you don't have a SourceId or TitleId in your Table3. Not going to be able to get that information, but you could still do:
\nSELECT DISTINCT Title_Name, Source_Name, Text_Content, Added_Date\nFROM \n(\n SELECT t.title_name, s.source_name, t1.text_content, t1.added_date \n FROM Table1 t1\n JOIN Title T \n ON t1.TitleId = T.TitleId\n JOIN Source S \n ON t1.SourceId = S.SourceId\n UNION\n SELECT t.title_name, s.source_name, t2.description, t2.added_date \n FROM Table2 t2\n JOIN Title T \n ON t2.TitleId = T.TitleId\n JOIN Source S \n ON t2.SourceId = S.SourceId\n UNION\n SELECT t3.title, 'Unknown', t3.description, t3.added_date \n FROM Table3 t3\n) t\nORDER BY added_date\n
\n
soup wrap:
You should use UNION. Try this (untested):
SELECT t.title_name, s.source_name, t1.text_content, t1.added_date
FROM Table1 t1
JOIN Title T
ON t1.TitleId = T.TitleId
JOIN Source S
ON t1.SourceId = S.SourceId
UNION
SELECT t.title_name, s.source_name, t2.description, t2.added_date
FROM Table2 t2
JOIN Title T
ON t2.TitleId = T.TitleId
JOIN Source S
ON t2.SourceId = S.SourceId
UNION
SELECT t.title_name, s.source_name, t3.description, t3.added_date
FROM Table3 t3
JOIN Title T
ON t3.TitleId = T.TitleId
JOIN Source S
ON t3.SourceId = S.SourceId
Well I just realized you don't have a SourceId or TitleId in your Table3. Not going to be able to get that information, but you could still do:
SELECT DISTINCT Title_Name, Source_Name, Text_Content, Added_Date
FROM
(
SELECT t.title_name, s.source_name, t1.text_content, t1.added_date
FROM Table1 t1
JOIN Title T
ON t1.TitleId = T.TitleId
JOIN Source S
ON t1.SourceId = S.SourceId
UNION
SELECT t.title_name, s.source_name, t2.description, t2.added_date
FROM Table2 t2
JOIN Title T
ON t2.TitleId = T.TitleId
JOIN Source S
ON t2.SourceId = S.SourceId
UNION
SELECT t3.title, 'Unknown', t3.description, t3.added_date
FROM Table3 t3
) t
ORDER BY added_date
qid & accept id:
(14374677, 14374705)
query:
Update a field just another one has some condition
soup:
You can use an inline IF statement, e.g.
\nUPDATE articles\nSET publishedDate = IF(published = 1, 'new date HERE', publishedDate)\n-- WHERE condition here\n
\nThis assumes that 1 = true; if you store the boolean as a string, then use IF(published = 'true', ...)
\nUPDATE 1
\n-- assumes 0 = false, 1 = true\nSET @status := 1;\nSET @newDate := CURDATE();\n\nUPDATE articles\nSET publishedDate = IF(1 = @status, @newDate, publishedDate),\n published = @status\n-- WHERE condition here\n
\n
soup wrap:
You can use an inline IF statement, e.g.
UPDATE articles
SET publishedDate = IF(published = 1, 'new date HERE', publishedDate)
-- WHERE condition here
This assumes that 1 = true; if you store the boolean as a string, then use IF(published = 'true', ...)
UPDATE 1
-- assumes 0 = false, 1 = true
SET @status := 1;
SET @newDate := CURDATE();
UPDATE articles
SET publishedDate = IF(1 = @status, @newDate, publishedDate),
published = @status
-- WHERE condition here
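MySQL's IF(cond, a, b) is shorthand for the standard SQL CASE expression, so the same conditional update can be sketched portably. Hypothetical data, using Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE articles (id INTEGER PRIMARY KEY, published INTEGER, publishedDate TEXT);
INSERT INTO articles VALUES (1, 1, NULL), (2, 0, NULL);
""")

# CASE WHEN cond THEN a ELSE b END is the portable spelling of IF(cond, a, b):
# only the published row gets a date; the other row keeps its old value.
conn.execute("""
    UPDATE articles
    SET publishedDate = CASE WHEN published = 1
                             THEN '2013-01-22' ELSE publishedDate END
""")
rows = conn.execute("SELECT id, publishedDate FROM articles ORDER BY id").fetchall()
print(rows)  # [(1, '2013-01-22'), (2, None)]
```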
qid & accept id:
(14385741, 14385817)
query:
Retrieve rows from a certain day but only in a certain hour
soup:
SELECT columns FROM dbo.table2\nWHERE \n CONVERT(DATE, given_schedule) \n = CONVERT(DATE, DATEADD(DAY, -3, CURRENT_TIMESTAMP))\nAND \n DATEPART(HOUR, given_schedule) \n = DATEPART(HOUR, CURRENT_TIMESTAMP);\n
\nTo address @Habo's point, you could also do:
\nDECLARE @s SMALLDATETIME = CURRENT_TIMESTAMP;\n\nSET @s = DATEADD(DAY, -3, DATEADD(MINUTE, -DATEPART(MINUTE, @s), @s));\n\nSELECT columns FROM dbo.table2\n WHERE given_schedule >= @s\n AND given_schedule < DATEADD(HOUR, 1, @s);\n
\nThis is, of course, most useful if there is actually an index with given_schedule as the leading column.
\n
soup wrap:
SELECT columns FROM dbo.table2
WHERE
CONVERT(DATE, given_schedule)
= CONVERT(DATE, DATEADD(DAY, -3, CURRENT_TIMESTAMP))
AND
DATEPART(HOUR, given_schedule)
= DATEPART(HOUR, CURRENT_TIMESTAMP);
To address @Habo's point, you could also do:
DECLARE @s SMALLDATETIME = CURRENT_TIMESTAMP;
SET @s = DATEADD(DAY, -3, DATEADD(MINUTE, -DATEPART(MINUTE, @s), @s));
SELECT columns FROM dbo.table2
WHERE given_schedule >= @s
AND given_schedule < DATEADD(HOUR, 1, @s);
This is, of course, most useful if there is actually an index with given_schedule as the leading column.
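The point about the index is that the second form compares the bare given_schedule column against two precomputed boundaries (a half-open range), rather than wrapping the column in CONVERT/DATEPART, which would prevent an index seek. A sketch with Python's sqlite3 and fixed, made-up timestamps:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table2 (given_schedule TEXT)")
conn.executemany("INSERT INTO table2 VALUES (?)",
                 [("2013-01-15 09:15:00",),   # 3 days ago, in the 09:00 hour
                  ("2013-01-15 10:05:00",),   # 3 days ago, wrong hour
                  ("2013-01-16 09:30:00",)])  # wrong day

# Pretend "now" is 2013-01-18 09:40: truncate to the hour, step back 3 days,
# then filter with a half-open range so the column itself stays unwrapped.
start = "2013-01-15 09:00:00"
end   = "2013-01-15 10:00:00"
rows = conn.execute(
    "SELECT given_schedule FROM table2 WHERE given_schedule >= ? AND given_schedule < ?",
    (start, end)).fetchall()
print(rows)  # [('2013-01-15 09:15:00',)]
```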
qid & accept id:
(14400023, 14400480)
query:
Display columns that contain a carriage return
soup:
This sounds like a homework question. So, let me give you some hints:
\n(1) You can generate a table using syntax, such as:
\nselect chr(13) as badchar from dual union all\nselect '!' . . .\n
\n(2) You can cross join this into the table and use a very similar where clause.
\n(3) You can then select the bad character from the table.
\n(4) You'll need an aggregation.
\nActually, I would be inclined to drop the requirement of one row per student and instead have one row per student/bad character. Here is an approach:
\nselect a.id,\n a.addr_1, a.addr_2, a.addr_3, a.addr_4, a.addr_5, a.addr_6, a.addr_7,\n ((case when INSTR(a.addr_1, b.badChar) > 0 then 'addr_1,' else '' end) ||\n (case when INSTR(a.addr_2, b.badChar) > 0 then 'addr_2,' else '' end) ||\n (case when INSTR(a.addr_3, b.badChar) > 0 then 'addr_3,' else '' end) ||\n (case when INSTR(a.addr_4, b.badChar) > 0 then 'addr_4,' else '' end) ||\n (case when INSTR(a.addr_5, b.badChar) > 0 then 'addr_5,' else '' end) ||\n (case when INSTR(a.addr_6, b.badChar) > 0 then 'addr_6,' else '' end) ||\n (case when INSTR(a.addr_7, b.badChar) > 0 then 'addr_7,' else '' end)\n ) as addrs,\n b.badChar\nfrom a cross join\n (select chr(13) as badChar from dual) as b\nWHERE INSTR(a.addr_1, b.badChar) > 0 OR\n INSTR(a.addr_2, b.badChar) > 0 OR\n INSTR(a.addr_3, b.badChar) > 0 OR\n INSTR(a.addr_4, b.badChar) > 0 OR\n INSTR(a.addr_5, b.badChar) > 0 OR\n INSTR(a.addr_6, b.badChar) > 0 OR\n INSTR(a.addr_7, b.badChar) > 0;\n
\nIt leaves an extra comma at the end of the column names. This can be removed by making this a subquery and doing string manipulations at the next level.
\nTo put all badchars on one line would require an aggregation. However, I am not clear what the 9th and 10th columns would contain in that case.
\n
soup wrap:
This sounds like a homework question. So, let me give you some hints:
(1) You can generate a table using syntax, such as:
select chr(13) as badchar from dual union all
select '!' . . .
(2) You can cross join this into the table and use a very similar where clause.
(3) You can then select the bad character from the table.
(4) You'll need an aggregation.
Actually, I would be inclined to drop the requirement of one row per student and instead have one row per student/bad character. Here is an approach:
select a.id,
a.addr_1, a.addr_2, a.addr_3, a.addr_4, a.addr_5, a.addr_6, a.addr_7,
((case when INSTR(a.addr_1, b.badChar) > 0 then 'addr_1,' else '' end) ||
(case when INSTR(a.addr_2, b.badChar) > 0 then 'addr_2,' else '' end) ||
(case when INSTR(a.addr_3, b.badChar) > 0 then 'addr_3,' else '' end) ||
(case when INSTR(a.addr_4, b.badChar) > 0 then 'addr_4,' else '' end) ||
(case when INSTR(a.addr_5, b.badChar) > 0 then 'addr_5,' else '' end) ||
(case when INSTR(a.addr_6, b.badChar) > 0 then 'addr_6,' else '' end) ||
(case when INSTR(a.addr_7, b.badChar) > 0 then 'addr_7,' else '' end)
) as addrs,
b.badChar
from a cross join
(select chr(13) as badChar from dual) b
WHERE INSTR(a.addr_1, b.badChar) > 0 OR
INSTR(a.addr_2, b.badChar) > 0 OR
INSTR(a.addr_3, b.badChar) > 0 OR
INSTR(a.addr_4, b.badChar) > 0 OR
INSTR(a.addr_5, b.badChar) > 0 OR
INSTR(a.addr_6, b.badChar) > 0 OR
INSTR(a.addr_7, b.badChar) > 0;
It leaves an extra comma at the end of the column names. This can be removed by making this a subquery and doing string manipulations at the next level.
To put all badchars on one line would require an aggregation. However, I am not clear what the 9th and 10th columns would contain in that case.
qid & accept id:
(14416241, 14416569)
query:
ORACLE Parsing XML string into separate records
soup:
you can use XMLTABLE. as your XML document seems to be a fragment in the row, i've wrapped this in a <root> element.
\nselect grp, substr(name, \n instr(name, '/', -1) + 1,\n instr(name, '@') - instr(name, '/', -1) - 1\n ) name\n from mytab m, \n xmltable(xmlnamespaces('DAV:' as "D"), \n '/root/D:href' passing xmltype(''||usr||' ')\n columns\n name varchar2(200) path './text()');\n
\ni've assumed a table where your xml column is stored as a clob/varchar2 called usr.
\nexample output for group1:
\nSQL> select grp, substr(name,\n 2 instr(name, '/', -1) + 1,\n 3 instr(name, '@') - instr(name, '/', -1) - 1\n 4 ) name\n 5 from mytab m,\n 6 xmltable(xmlnamespaces('DAV:' as "D"),\n 7 '/root/D:href' passing xmltype(''||usr||' ')\n 8 COLUMNS\n 9 name VARCHAR2(200) path './text()');\n\nGRP NAME\n------ ----------\ngroup1 admin\ngroup1 oracle\ngroup1 user1\n
\nhttp://sqlfiddle.com/#!4/435cd/1
\n
soup wrap:
you can use XMLTABLE. as your XML document seems to be a fragment in the row, i've wrapped this in a <root> element.
select grp, substr(name,
instr(name, '/', -1) + 1,
instr(name, '@') - instr(name, '/', -1) - 1
) name
from mytab m,
xmltable(xmlnamespaces('DAV:' as "D"),
'/root/D:href' passing xmltype('<root>'||usr||'</root>')
columns
name varchar2(200) path './text()');
i've assumed a table where your xml column is stored as a clob/varchar2 called usr.
example output for group1:
SQL> select grp, substr(name,
2 instr(name, '/', -1) + 1,
3 instr(name, '@') - instr(name, '/', -1) - 1
4 ) name
5 from mytab m,
6 xmltable(xmlnamespaces('DAV:' as "D"),
7 '/root/D:href' passing xmltype('<root>'||usr||'</root>')
8 COLUMNS
9 name VARCHAR2(200) path './text()');
GRP NAME
------ ----------
group1 admin
group1 oracle
group1 user1
http://sqlfiddle.com/#!4/435cd/1
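The same extraction (everything between the last '/' and the '@' of each DAV: href) can be sketched outside Oracle; here with Python's xml.etree and a hypothetical fragment shaped like the one described:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment like the one stored in the usr column, wrapped in a
# root element so it parses as one document (mirroring the answer's approach).
fragment = """<root xmlns:D="DAV:">
  <D:href>http://host/users/admin@example.com</D:href>
  <D:href>http://host/users/oracle@example.com</D:href>
</root>"""

names = []
for href in ET.fromstring(fragment).findall("{DAV:}href"):
    text = href.text
    # Same slicing as the SUBSTR/INSTR pair: take the characters between
    # the last '/' and the '@'.
    last_slash = text.rfind("/")
    at = text.find("@")
    names.append(text[last_slash + 1:at])
print(names)  # ['admin', 'oracle']
```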
qid & accept id:
(14442822, 14443077)
query:
Using a date field from a form in an access query
soup:
Use a PARAMETERS clause as the first line of your SQL to inform the db engine the form control contains a Date/Time value.
\nPARAMETERS Forms!Frm_Start![Date] DateTime;\n
\nThen use the parameter with DateAdd() in your WHERE clause:
\nWHERE DateValue([TIMESTAMP])=DateAdd("d", 1, Forms!Frm_Start![Date])\n
\nHowever, that will require running DateValue() for every row in the table. This should be faster with [TIMESTAMP] indexed:
\nWHERE\n [TIMESTAMP] >= DateAdd("d", 1, Forms!Frm_Start![Date])\n AND [TIMESTAMP] < DateAdd("d", 2, Forms!Frm_Start![Date])\n
\n
soup wrap:
Use a PARAMETERS clause as the first line of your SQL to inform the db engine the form control contains a Date/Time value.
PARAMETERS Forms!Frm_Start![Date] DateTime;
Then use the parameter with DateAdd() in your WHERE clause:
WHERE DateValue([TIMESTAMP])=DateAdd("d", 1, Forms!Frm_Start![Date])
However, that will require running DateValue() for every row in the table. This should be faster with [TIMESTAMP] indexed:
WHERE
[TIMESTAMP] >= DateAdd("d", 1, Forms!Frm_Start![Date])
AND [TIMESTAMP] < DateAdd("d", 2, Forms!Frm_Start![Date])
qid & accept id:
(14446303, 14446434)
query:
Find records that have related records in the past
soup:
Try using NOT EXISTS instead of COUNT = 0. This should perform much better.
\nSELECT COUNT(*)\nFROM log AS log_main\nWHERE log_main.status=1 \nAND NOT EXISTS\n ( SELECT 1\n FROM log AS log_inner\n WHERE log_inner.fingerprint_id=log_main.fingerprint_id\n AND log_inner.status = 0\n AND log_inner.date < log_main.date \n AND log_inner.date >= (log_main.date - INTERVAL 35 SECOND)\n );\n
\nYou should also ensure the table is properly indexed.
\nEDIT
\nI believe using LEFT JOIN/IS NULL is more efficient in MySQL than using NOT EXISTS, so this will perform better than the above (although perhaps not significantly):
\nSELECT COUNT(*)\nFROM log AS log_main\n LEFT JOIN log AS log_inner\n ON log_inner.fingerprint_id=log_main.fingerprint_id\n AND log_inner.status = 0\n AND log_inner.date < log_main.date \n AND log_inner.date >= (log_main.date - INTERVAL 35 SECOND)\nWHERE log_main.status = 1 \nAND Log_inner.fingerprint_id IS NULL;\n
\nEDIT 2
\nTo get records with 1 or 2 attempts etc I would still use a JOIN, but like so:
\nSELECT COUNT(*)\nFROM ( SELECT log_Main.id\n FROM log AS log_main\n INNER JOIN log AS log_inner\n ON log_inner.fingerprint_id=log_main.fingerprint_id\n AND log_inner.status = 0\n AND log_inner.date < log_main.date \n AND log_inner.date >= (log_main.date - INTERVAL 35 SECOND)\n WHERE log_main.status = 1 \n AND Log_inner.fingerprint_id IS NULL\n GROUP BY log_Main.id\n HAVING COUNT(log_Inner.id) = 1\n ) d\n
\n
soup wrap:
Try using NOT EXISTS instead of COUNT = 0. This should perform much better.
SELECT COUNT(*)
FROM log AS log_main
WHERE log_main.status=1
AND NOT EXISTS
( SELECT 1
FROM log AS log_inner
WHERE log_inner.fingerprint_id=log_main.fingerprint_id
AND log_inner.status = 0
AND log_inner.date < log_main.date
AND log_inner.date >= (log_main.date - INTERVAL 35 SECOND)
);
You should also ensure the table is properly indexed.
EDIT
I believe using LEFT JOIN/IS NULL is more efficient in MySQL than using NOT EXISTS, so this will perform better than the above (although perhaps not significantly):
SELECT COUNT(*)
FROM log AS log_main
LEFT JOIN log AS log_inner
ON log_inner.fingerprint_id=log_main.fingerprint_id
AND log_inner.status = 0
AND log_inner.date < log_main.date
AND log_inner.date >= (log_main.date - INTERVAL 35 SECOND)
WHERE log_main.status = 1
AND Log_inner.fingerprint_id IS NULL;
EDIT 2
To get records with 1 or 2 attempts etc I would still use a JOIN, but like so:
SELECT COUNT(*)
FROM ( SELECT log_Main.id
FROM log AS log_main
INNER JOIN log AS log_inner
ON log_inner.fingerprint_id=log_main.fingerprint_id
AND log_inner.status = 0
AND log_inner.date < log_main.date
AND log_inner.date >= (log_main.date - INTERVAL 35 SECOND)
WHERE log_main.status = 1
AND Log_inner.fingerprint_id IS NULL
GROUP BY log_Main.id
HAVING COUNT(log_Inner.id) = 1
) d
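A runnable sketch of the NOT EXISTS form, using Python's sqlite3 and invented log rows. SQLite spells MySQL's (date - INTERVAL 35 SECOND) as datetime(date, '-35 seconds'):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE log (id INTEGER PRIMARY KEY, fingerprint_id INTEGER,
                  status INTEGER, date TEXT);
-- Success at :40 preceded by a failure 20s earlier (excluded);
-- success for fingerprint 8 with no failure in the prior 35s (counted).
INSERT INTO log VALUES
  (1, 7, 0, '2013-01-22 10:00:20'),
  (2, 7, 1, '2013-01-22 10:00:40'),
  (3, 8, 1, '2013-01-22 10:00:00');
""")

(count,) = conn.execute("""
    SELECT COUNT(*)
    FROM log AS log_main
    WHERE log_main.status = 1
      AND NOT EXISTS (
          SELECT 1 FROM log AS log_inner
          WHERE log_inner.fingerprint_id = log_main.fingerprint_id
            AND log_inner.status = 0
            AND log_inner.date < log_main.date
            AND log_inner.date >= datetime(log_main.date, '-35 seconds')
      )
""").fetchone()
print(count)  # 1
```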
qid & accept id:
(14469652, 14469784)
query:
How to generate rows for date range by key
soup:
in 10g/11g you can use the model clause for this.
\nSQL> with emps as (select rownum id, name, start_date,\n 2 end_date, trunc(end_date)-trunc(start_date) date_range\n 3 from table1)\n 4 select name, the_date\n 5 from emps\n 6 model partition by(id as key)\n 7 dimension by(0 as f)\n 8 measures(name, start_date, cast(null as date) the_date, date_range)\n 9 rules (the_date [for f from 0 to date_range[0] increment 1] = start_date[0] + cv(f),\n 10 name[any] = name[0]);\n\nNAME THE_DATE\n----------- ----------\nDAVID SMITH 01-01-2001\nDAVID SMITH 01-02-2001\nDAVID SMITH 01-03-2001\nDAVID SMITH 01-04-2001\nDAVID SMITH 01-05-2001\nDAVID SMITH 01-06-2001\nJOHN SMITH 02-07-2012\nJOHN SMITH 02-08-2012\nJOHN SMITH 02-09-2012\n\n9 rows selected.\n
\nie your base query:
\nselect rownum id, name, start_date,\n end_date, trunc(end_date)-trunc(start_date) date_range\n from table1\n
\njust defines the dates + the range (I used rownum id, but if you have a PK you can use that instead).
\nthe partition splits our calculations per ID(unique row):
\n6 model partition by(id as key)\n
\nthe measures:
\n8 measures(name, start_date, cast(null as date) the_date, date_range)\n
\ndefines the attributes we will be outputting/calculating. in this case, we're working with name, and the start_date plus the range of rows to generate. additionally i've defined a column the_date that will hold the calculated date (i.e we want to calculate start_date + n where n is from 0 to the range).
\nthe rules define HOW we are going to populate our columns:
\n9 rules (the_date [for f from 0 to date_range[0] increment 1] = start_date[0] + cv(f),\n10 name[any] = name[0]);\n
\nso with
\nthe_date [for f from 0 to date_range[0] increment 1]\n
\nwe are saying that we will generate the number of rows that date_range holds+1 (ie 6 dates in total). the value of f can be referenced through the cv(current value) function.
\nso on row 1 for david, we'd have the_date [0] = start_date+0 and subsequently on row 2, we'd have the_date [1] = start_date+1. all the way up to start_date+5 (i.e the end_date)
\np.s. \nfor connect by you'd need to do something like this:
\nselect \n A.EMPLOYEE_NAME,\n A.START_DATE+(b.r-1) AS INDIVIDUAL_DAY,\n TO_CHAR(A.START_DATE,'MM/DD/YYYY') START_DATE,\n TO_CHAR(A.END_DATE,'MM/DD/YYYY') END_DATE\nFROM table1 A\n cross join (select rownum r\n from (select max(end_date-start_date) d from table1)\n connect by level-1 <= d) b\n where A.START_DATE+(b.r-1) <= A.END_DATE\n order by 1, 2;\n
\ni.e. isolate the connect by to a subquery, then filter out the rows where individual_day > end_date.
\nbut i WOULD NOT recommend this approach. its performance will be worse compared to the model approach (especially if the ranges get big).
\n
soup wrap:
in 10g/11g you can use the model clause for this.
SQL> with emps as (select rownum id, name, start_date,
2 end_date, trunc(end_date)-trunc(start_date) date_range
3 from table1)
4 select name, the_date
5 from emps
6 model partition by(id as key)
7 dimension by(0 as f)
8 measures(name, start_date, cast(null as date) the_date, date_range)
9 rules (the_date [for f from 0 to date_range[0] increment 1] = start_date[0] + cv(f),
10 name[any] = name[0]);
NAME THE_DATE
----------- ----------
DAVID SMITH 01-01-2001
DAVID SMITH 01-02-2001
DAVID SMITH 01-03-2001
DAVID SMITH 01-04-2001
DAVID SMITH 01-05-2001
DAVID SMITH 01-06-2001
JOHN SMITH 02-07-2012
JOHN SMITH 02-08-2012
JOHN SMITH 02-09-2012
9 rows selected.
ie your base query:
select rownum id, name, start_date,
end_date, trunc(end_date)-trunc(start_date) date_range
from table1
just defines the dates + the range (I used rownum id, but if you have a PK you can use that instead).
the partition splits our calculations per ID(unique row):
6 model partition by(id as key)
the measures:
8 measures(name, start_date, cast(null as date) the_date, date_range)
defines the attributes we will be outputting/calculating. in this case, we're working with name, and the start_date plus the range of rows to generate. additionally i've defined a column the_date that will hold the calculated date (i.e we want to calculate start_date + n where n is from 0 to the range).
the rules define HOW we are going to populate our columns:
9 rules (the_date [for f from 0 to date_range[0] increment 1] = start_date[0] + cv(f),
10 name[any] = name[0]);
so with
the_date [for f from 0 to date_range[0] increment 1]
we are saying that we will generate the number of rows that date_range holds+1 (ie 6 dates in total). the value of f can be referenced through the cv(current value) function.
so on row 1 for david, we'd have the_date [0] = start_date+0 and subsequently on row 2, we'd have the_date [1] = start_date+1. all the way up to start_date+5 (i.e the end_date)
p.s.
for connect by you'd need to do something like this:
select
A.EMPLOYEE_NAME,
A.START_DATE+(b.r-1) AS INDIVIDUAL_DAY,
TO_CHAR(A.START_DATE,'MM/DD/YYYY') START_DATE,
TO_CHAR(A.END_DATE,'MM/DD/YYYY') END_DATE
FROM table1 A
cross join (select rownum r
from (select max(end_date-start_date) d from table1)
connect by level-1 <= d) b
where A.START_DATE+(b.r-1) <= A.END_DATE
order by 1, 2;
i.e. isolate the connect by to a subquery, then filter out the rows where individual_day > end_date.
but i WOULD NOT recommend this approach. its performance will be worse compared to the model approach (especially if the ranges get big).
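Outside Oracle, the usual way to generate one row per day of a range is a recursive CTE. A sketch in Python's sqlite3 with one hypothetical employee:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (name TEXT, start_date TEXT, end_date TEXT);
INSERT INTO table1 VALUES ('JOHN SMITH', '2012-02-07', '2012-02-09');
""")

# The recursive member adds one day at a time until end_date is reached,
# playing the role of the MODEL / CONNECT BY row generation above.
rows = conn.execute("""
    WITH RECURSIVE days(name, the_date, end_date) AS (
        SELECT name, start_date, end_date FROM table1
        UNION ALL
        SELECT name, date(the_date, '+1 day'), end_date
        FROM days WHERE the_date < end_date
    )
    SELECT name, the_date FROM days ORDER BY name, the_date
""").fetchall()
print(rows)
# [('JOHN SMITH', '2012-02-07'), ('JOHN SMITH', '2012-02-08'), ('JOHN SMITH', '2012-02-09')]
```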
qid & accept id:
(14479213, 14479574)
query:
Join 2 rows in same table sql query
soup:
Try this (Assuming 'HE' has a space on either side);
\nselect name, count\nfrom yourTable where charindex(' he ',name)=0\nunion\nselect 'HE' name, sum(count) as count\nfrom yourTable where charindex(' he ',name)>0\n
\nAnother way is;
\nselect A.name, sum(A.count) as count\nfrom (\n select case charindex(' he ',name) \n when 0 then name else 'HE' end name, count\n from yourTable\n) A\ngroup by A.name\norder by A.name\n
\n
soup wrap:
Try this (Assuming 'HE' has a space on either side);
select name, count
from yourTable where charindex(' he ',name)=0
union
select 'HE' name, sum(count) as count
from yourTable where charindex(' he ',name)>0
Another way is;
select A.name, sum(A.count) as count
from (
select case charindex(' he ',name)
when 0 then name else 'HE' end name, count
from yourTable
) A
group by A.name
order by A.name
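The second form ports readily: SQLite's INSTR() plays the role of T-SQL's CHARINDEX(). A sketch with invented rows, collapsing every name containing ' he ' into one 'HE' group:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE yourTable (name TEXT, count INTEGER);
INSERT INTO yourTable VALUES
  ('what he said', 3), ('so he did', 2), ('no match here', 5);
""")

# The CASE maps matching names to the literal 'HE' before grouping,
# so their counts are summed into a single row.
rows = conn.execute("""
    SELECT A.name, SUM(A.count) AS count
    FROM ( SELECT CASE INSTR(name, ' he ')
                  WHEN 0 THEN name ELSE 'HE' END AS name, count
           FROM yourTable ) A
    GROUP BY A.name
    ORDER BY A.name
""").fetchall()
print(rows)  # [('HE', 5), ('no match here', 5)]
```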
qid & accept id:
(14482625, 14482648)
query:
Display full column name instead of shortened
soup:
SQL*Plus will format the column width to the size of the datatype. in the case of DUAL, DUMMY is a varchar2(1). you can control this with
\ncol DUMMY format a5\n
\nie:
\nSQL> select * from dual;\n\nD\n-\nX\n\nSQL> col DUMMY format a5\nSQL> select * from dual;\n\nDUMMY\n-----\nX\n
\n
soup wrap:
SQL*Plus will format the column width to the size of the datatype. in the case of DUAL, DUMMY is a varchar2(1). you can control this with
col DUMMY format a5
ie:
SQL> select * from dual;
D
-
X
SQL> col DUMMY format a5
SQL> select * from dual;
DUMMY
-----
X
qid & accept id:
(14501440, 14501561)
query:
How to delete leading empty space in a SQL Database Table using MS SQL Server Managment Studio
soup:
This will remove leading and trailing spaces
\nUpdate tablename set fieldName = ltrim(rtrim(fieldName));\n
\nSome versions of SQL support TRIM directly:
\nUpdate tablename set fieldName = trim(fieldName);\n
\nIf you just want to remove leading spaces:
\nupdate tablename set fieldName = LTRIM(fieldName);\n
\n
soup wrap:
This will remove leading and trailing spaces
Update tablename set fieldName = ltrim(rtrim(fieldName));
Some versions of SQL support TRIM directly:
Update tablename set fieldName = trim(fieldName);
If you just want to remove leading spaces:
update tablename set fieldName = LTRIM(fieldName);
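A quick sketch with Python's sqlite3 (which provides LTRIM, RTRIM and TRIM) confirming the leading-and-trailing variant on made-up data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tablename (fieldName TEXT);
INSERT INTO tablename VALUES ('  padded  '), ('clean');
""")

# LTRIM(RTRIM(x)) strips both ends; already-clean values are unchanged.
conn.execute("UPDATE tablename SET fieldName = LTRIM(RTRIM(fieldName))")
rows = [r[0] for r in conn.execute("SELECT fieldName FROM tablename")]
print(rows)  # ['padded', 'clean']
```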
qid & accept id:
(14513314, 14513873)
query:
if statement using a query in sql
soup:
(1) Using a statement block
\nIF \n(SELECT COUNT(*) FROM Production.Product WHERE Name LIKE 'Touring-3000%' ) > 5\nBEGIN\n PRINT 'There are 5 Touring-3000 bikes.'\nEND\nELSE \nBEGIN\n PRINT 'There are Less than 5 Touring-3000 bikes.'\nEND ;\n
\n(2) Calling stored procedures.
\nDECLARE @compareprice money, @cost money \nEXECUTE Production.uspGetList '%Bikes%', 700, \n @compareprice OUT, \n @cost OUTPUT\nIF @cost <= @compareprice \nBEGIN\n PRINT 'These products can be purchased for less than \n $'+RTRIM(CAST(@compareprice AS varchar(20)))+'.'\nEND\nELSE\n PRINT 'The prices for all products in this category exceed \n $'+ RTRIM(CAST(@compareprice AS varchar(20)))+'.'\n
\nMore Examples:
\n\n
soup wrap:
(1) Using a statement block
IF
(SELECT COUNT(*) FROM Production.Product WHERE Name LIKE 'Touring-3000%' ) > 5
BEGIN
PRINT 'There are 5 Touring-3000 bikes.'
END
ELSE
BEGIN
PRINT 'There are Less than 5 Touring-3000 bikes.'
END ;
(2) Calling stored procedures.
DECLARE @compareprice money, @cost money
EXECUTE Production.uspGetList '%Bikes%', 700,
@compareprice OUT,
@cost OUTPUT
IF @cost <= @compareprice
BEGIN
PRINT 'These products can be purchased for less than
$'+RTRIM(CAST(@compareprice AS varchar(20)))+'.'
END
ELSE
PRINT 'The prices for all products in this category exceed
$'+ RTRIM(CAST(@compareprice AS varchar(20)))+'.'
More Examples:
qid & accept id:
(14537280, 14537430)
query:
SQL instead-of trigger
soup:
Something like this:
\nCREATE trigger update_LateRating_title INSTEAD OF UPDATE OF title ON LateRating\nBEGIN\n UPDATE Movie SET title = new.title WHERE movie.mID = old.mID;\nEND;\n
\nAs requested in the comment, here is a trigger to update only movies that have reviews greater than 2 in LateRating:
\nCREATE trigger update_LateRating_title INSTEAD OF \nUPDATE OF title ON LateRating\nBEGIN\n UPDATE Movie SET title = new.title \n WHERE movie.mID = old.mID \n AND movie.mID IN (SELECT mID FROM LateRating WHERE stars > 2);\nEND;\n
\n(There are different ways to interpret this later request. Should title updates be allowed for the movie which has more than 2 stars somewhere or only for the record actually having more than 2 stars? My code is for the former choice).
\n
soup wrap:
Something like this:
CREATE trigger update_LateRating_title INSTEAD OF UPDATE OF title ON LateRating
BEGIN
UPDATE Movie SET title = new.title WHERE movie.mID = old.mID;
END;
As requested in the comment, here is a trigger to update only movies that have reviews greater than 2 in LateRating:
CREATE trigger update_LateRating_title INSTEAD OF
UPDATE OF title ON LateRating
BEGIN
UPDATE Movie SET title = new.title
WHERE movie.mID = old.mID
AND movie.mID IN (SELECT mID FROM LateRating WHERE stars > 2);
END;
(There are different ways to interpret this later request. Should title updates be allowed for the movie which has more than 2 stars somewhere or only for the record actually having more than 2 stars? My code is for the former choice).
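SQLite, whose syntax this answer resembles (INSTEAD OF triggers there apply to views), can run a close variant end to end. The view below is a hypothetical stand-in for the LateRating view from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Movie (mID INTEGER PRIMARY KEY, title TEXT);
INSERT INTO Movie VALUES (101, 'Gone with the Wind');
-- Simplified stand-in for the question's LateRating view.
CREATE VIEW LateRating AS SELECT mID, title FROM Movie;
CREATE TRIGGER update_LateRating_title INSTEAD OF UPDATE OF title ON LateRating
BEGIN
    UPDATE Movie SET title = new.title WHERE Movie.mID = old.mID;
END;
""")

# Updating the view fires the trigger, which redirects the write to Movie.
conn.execute("UPDATE LateRating SET title = 'E.T.' WHERE mID = 101")
(title,) = conn.execute("SELECT title FROM Movie WHERE mID = 101").fetchone()
print(title)  # E.T.
```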
qid & accept id:
(14540736, 14541743)
query:
sql avoid cartesian product
soup:
So it looks like you want all records from each of the tables that are identical, and then only those from each that are distinct. That means you need to UNION 3 sets of queries.
\nTry something like this:
\nSELECT t1.state, \n t1.lname, \n t1.fname, \n t1.network as t1Network, \n t2.network as t2Network\nFROM table1 t1 \n INNER JOIN table2 t2 \n ON t1.fname=t2.fname \n AND t1.lname=t2.lname \n AND t1.state=t2.state\n AND t1.network=t2.network\nUNION \nSELECT t1.state, \n t1.lname, \n t1.fname, \n t1.network as t1Network, \n t2.network as t2Network\nFROM table1 t1 \n LEFT JOIN table2 t2 \n ON t1.fname=t2.fname \n AND t1.lname=t2.lname \n AND t1.state=t2.state\n AND t1.network=t2.network\nWHERE t2.network IS NULL\nUNION \nSELECT t2.state, \n t2.lname, \n t2.fname, \n t1.network as t1Network, \n t2.network as t2Network\nFROM table2 t2 \n LEFT JOIN table1 t1\n ON t1.fname=t2.fname \n AND t1.lname=t2.lname \n AND t1.state=t2.state\n AND t1.network=t2.network\nWHERE t1.network IS NULL\n
\nThis should give you your desired results.
\nAnd here is the SQL Fiddle to confirm.
\n--EDIT
\nNot thinking today -- you don't really need that first query. You can remove the WHERE condition from the 2nd query and it works the same way. Tired :-)
\nHere is the updated query -- both should work just fine though, this is just easier to read:
\nSELECT t1.state, \n t1.lname, \n t1.fname, \n t1.network as t1Network, \n t2.network as t2Network\nFROM table1 t1 \n LEFT JOIN table2 t2 \n ON t1.fname=t2.fname \n AND t1.lname=t2.lname \n AND t1.state=t2.state\n AND t1.network=t2.network\nUNION \nSELECT t2.state, \n t2.lname, \n t2.fname, \n t1.network as t1Network, \n t2.network as t2Network\nFROM table2 t2 \n LEFT JOIN table1 t1\n ON t1.fname=t2.fname \n AND t1.lname=t2.lname \n AND t1.state=t2.state\n AND t1.network=t2.network\nWHERE t1.network IS NULL\n
\nAnd the updated fiddle.
\nBTW -- these should both work in MSAccess as it supports UNION.
\nGood luck.
\n
soup wrap:
So it looks like you want all records from each of the tables that are identical, and then only those from each that are distinct. That means you need to UNION 3 sets of queries.
Try something like this:
SELECT t1.state,
t1.lname,
t1.fname,
t1.network as t1Network,
t2.network as t2Network
FROM table1 t1
INNER JOIN table2 t2
ON t1.fname=t2.fname
AND t1.lname=t2.lname
AND t1.state=t2.state
AND t1.network=t2.network
UNION
SELECT t1.state,
t1.lname,
t1.fname,
t1.network as t1Network,
t2.network as t2Network
FROM table1 t1
LEFT JOIN table2 t2
ON t1.fname=t2.fname
AND t1.lname=t2.lname
AND t1.state=t2.state
AND t1.network=t2.network
WHERE t2.network IS NULL
UNION
SELECT t2.state,
t2.lname,
t2.fname,
t1.network as t1Network,
t2.network as t2Network
FROM table2 t2
LEFT JOIN table1 t1
ON t1.fname=t2.fname
AND t1.lname=t2.lname
AND t1.state=t2.state
AND t1.network=t2.network
WHERE t1.network IS NULL
This should give you your desired results.
And here is the SQL Fiddle to confirm.
--EDIT
Not thinking today -- you don't really need that first query. You can remove the WHERE condition from the 2nd query and it works the same way. Tired :-)
Here is the updated query -- both should work just fine though, this is just easier to read:
SELECT t1.state,
t1.lname,
t1.fname,
t1.network as t1Network,
t2.network as t2Network
FROM table1 t1
LEFT JOIN table2 t2
ON t1.fname=t2.fname
AND t1.lname=t2.lname
AND t1.state=t2.state
AND t1.network=t2.network
UNION
SELECT t2.state,
t2.lname,
t2.fname,
t1.network as t1Network,
t2.network as t2Network
FROM table2 t2
LEFT JOIN table1 t1
ON t1.fname=t2.fname
AND t1.lname=t2.lname
AND t1.state=t2.state
AND t1.network=t2.network
WHERE t1.network IS NULL
And the updated fiddle.
BTW -- these should both work in MSAccess as it supports UNION.
Good luck.
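For reference, the LEFT JOIN + UNION pattern from this answer can be exercised end to end with SQLite through Python's sqlite3. The sample rows below are invented for illustration; matched pairs appear once, and unmatched rows from either table appear with NULL on the other side:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE table1 (state TEXT, lname TEXT, fname TEXT, network TEXT);
CREATE TABLE table2 (state TEXT, lname TEXT, fname TEXT, network TEXT);
INSERT INTO table1 VALUES ('NY','Smith','Ann','A'), ('NY','Jones','Bob','B');
INSERT INTO table2 VALUES ('NY','Smith','Ann','A'), ('CA','Lee','Cho','C');
""")
# LEFT JOIN keeps every table1 row; the second branch adds table2 rows
# that have no table1 match (a FULL OUTER JOIN emulation).
rows = cur.execute("""
SELECT t1.state, t1.lname, t1.fname, t1.network AS t1Network, t2.network AS t2Network
FROM table1 t1
LEFT JOIN table2 t2
  ON t1.fname=t2.fname AND t1.lname=t2.lname
 AND t1.state=t2.state AND t1.network=t2.network
UNION
SELECT t2.state, t2.lname, t2.fname, t1.network, t2.network
FROM table2 t2
LEFT JOIN table1 t1
  ON t1.fname=t2.fname AND t1.lname=t2.lname
 AND t1.state=t2.state AND t1.network=t2.network
WHERE t1.network IS NULL
""").fetchall()
```

The UNION also deduplicates, so the row that exists in both tables shows up exactly once.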
qid & accept id:
(14540917, 14540967)
query:
How can I create multiple rows from a single row (sql server 2008)
soup:
I'm a little confused by your question, but it sounds like you're trying to make your Company_X_Sales table have 3 rows instead of 1, just with varying quantities? If so, something like this should work:
\nSELECT S.PO_Number, C.InterCO_PO_no, C.Sales_Order_No, C.Part_No, S.Qty\nFROM Company_X_Sales C\n JOIN CPC_Sales S ON C.InterCO_PO_no = S.InterCO_SO_No\n
\nHere is the SQL Fiddle.
\nThat will give you the 4 rows with the correct quantities. Then you can delete and reinsert accordingly.
\nTo get those rows into the table, you have a few options, but something like this should work:
\n--Flag the rows for deletion\nUPDATE Company_X_Sales SET Qty = -1 -- Or some arbitrary value that does not exist in the table\n\n--Insert new correct rows\nINSERT INTO Company_X_Sales \nSELECT C.InterCO_PO_no, C.Sales_Order_No, C.Part_No, S.Qty\nFROM Company_X_Sales C\n JOIN CPC_Sales S ON C.InterCO_PO_no = S.InterCO_SO_No\n\n--Cleanup flagged rows for deletion\nDELETE FROM Company_X_Sales WHERE Qty = -1\n
\nGood luck.
\n
soup wrap:
I'm a little confused by your question, but it sounds like you're trying to make your Company_X_Sales table have 3 rows instead of 1, just with varying quantities? If so, something like this should work:
SELECT S.PO_Number, C.InterCO_PO_no, C.Sales_Order_No, C.Part_No, S.Qty
FROM Company_X_Sales C
JOIN CPC_Sales S ON C.InterCO_PO_no = S.InterCO_SO_No
Here is the SQL Fiddle.
That will give you the 4 rows with the correct quantities. Then you can delete and reinsert accordingly.
To get those rows into the table, you have a few options, but something like this should work:
--Flag the rows for deletion
UPDATE Company_X_Sales SET Qty = -1 -- Or some arbitrary value that does not exist in the table
--Insert new correct rows
INSERT INTO Company_X_Sales
SELECT C.InterCO_PO_no, C.Sales_Order_No, C.Part_No, S.Qty
FROM Company_X_Sales C
JOIN CPC_Sales S ON C.InterCO_PO_no = S.InterCO_SO_No
--Cleanup flagged rows for deletion
DELETE FROM Company_X_Sales WHERE Qty = -1
Good luck.
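The row-multiplying join above can be sketched with SQLite through Python's sqlite3; the table shapes and values here are invented, and one Company_X_Sales row fans out into one row per matching CPC_Sales row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Company_X_Sales (InterCO_PO_no INTEGER, Sales_Order_No TEXT,
                              Part_No TEXT, Qty INTEGER);
CREATE TABLE CPC_Sales (PO_Number TEXT, InterCO_SO_No INTEGER, Qty INTEGER);
INSERT INTO Company_X_Sales VALUES (7, 'SO1', 'P1', 60);
INSERT INTO CPC_Sales VALUES ('PO-A', 7, 10), ('PO-B', 7, 20), ('PO-C', 7, 30);
""")
# The single sales row joins to three CPC rows, each carrying its own Qty.
rows = cur.execute("""
SELECT S.PO_Number, C.InterCO_PO_no, C.Sales_Order_No, C.Part_No, S.Qty
FROM Company_X_Sales C
JOIN CPC_Sales S ON C.InterCO_PO_no = S.InterCO_SO_No
""").fetchall()
```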
qid & accept id:
(14565788, 14566013)
query:
How to group by month from Date field using sql
soup:
I would use this:
\nSELECT Closing_Date = DATEADD(MONTH, DATEDIFF(MONTH, 0, Closing_Date), 0), \n Category, \n COUNT(Status) TotalCount \nFROM MyTable\nWHERE Closing_Date >= '2012-02-01' \nAND Closing_Date <= '2012-12-31'\nAND Defect_Status1 IS NOT NULL\nGROUP BY DATEADD(MONTH, DATEDIFF(MONTH, 0, Closing_Date), 0), Category;\n
\nThis will group by the first of every month, so
\n`DATEADD(MONTH, DATEDIFF(MONTH, 0, '20130128'), 0)` \n
\nwill give '20130101'. I generally prefer this method as it keeps dates as dates.
\nAlternatively you could use something like this:
\nSELECT Closing_Year = DATEPART(YEAR, Closing_Date),\n Closing_Month = DATEPART(MONTH, Closing_Date),\n Category, \n COUNT(Status) TotalCount \nFROM MyTable\nWHERE Closing_Date >= '2012-02-01' \nAND Closing_Date <= '2012-12-31'\nAND Defect_Status1 IS NOT NULL\nGROUP BY DATEPART(YEAR, Closing_Date), DATEPART(MONTH, Closing_Date), Category;\n
\nIt really depends what your desired output is. (Closing Year is not necessary in your example, but if the date range crosses a year boundary it may be).
\n
soup wrap:
I would use this:
SELECT Closing_Date = DATEADD(MONTH, DATEDIFF(MONTH, 0, Closing_Date), 0),
Category,
COUNT(Status) TotalCount
FROM MyTable
WHERE Closing_Date >= '2012-02-01'
AND Closing_Date <= '2012-12-31'
AND Defect_Status1 IS NOT NULL
GROUP BY DATEADD(MONTH, DATEDIFF(MONTH, 0, Closing_Date), 0), Category;
This will group by the first of every month, so
`DATEADD(MONTH, DATEDIFF(MONTH, 0, '20130128'), 0)`
will give '20130101'. I generally prefer this method as it keeps dates as dates.
Alternatively you could use something like this:
SELECT Closing_Year = DATEPART(YEAR, Closing_Date),
Closing_Month = DATEPART(MONTH, Closing_Date),
Category,
COUNT(Status) TotalCount
FROM MyTable
WHERE Closing_Date >= '2012-02-01'
AND Closing_Date <= '2012-12-31'
AND Defect_Status1 IS NOT NULL
GROUP BY DATEPART(YEAR, Closing_Date), DATEPART(MONTH, Closing_Date), Category;
It really depends what your desired output is. (Closing Year is not necessary in your example, but if the date range crosses a year boundary it may be).
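The "truncate to the first of the month, keep it a date" idea can be demonstrated outside SQL Server as well. A minimal sketch with SQLite through Python's sqlite3, where strftime('%Y-%m-01', ...) stands in for the DATEADD/DATEDIFF trick (sample rows invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE MyTable (Closing_Date TEXT, Category TEXT, Status TEXT);
INSERT INTO MyTable VALUES
 ('2012-02-03', 'Bug', 'Closed'),
 ('2012-02-28', 'Bug', 'Closed'),
 ('2012-03-10', 'Bug', 'Closed');
""")
# Every date collapses to the first of its month, so grouping stays a date.
rows = cur.execute("""
SELECT strftime('%Y-%m-01', Closing_Date) AS month_start,
       Category,
       COUNT(Status) AS TotalCount
FROM MyTable
GROUP BY month_start, Category
ORDER BY month_start
""").fetchall()
```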
qid & accept id:
(14610658, 14610810)
query:
Average or calculate average
soup:
You can do this in one step. A tested example may be found here: http://sqlfiddle.com/#!2/05760/12
\nSELECT \n COUNT(*) / \n COUNT(DISTINCT cast(`date` as date)) avg_posts_per_day\nFROM \n posts\n
\n
\nOr you can do this in two steps:
\n\n- get posts per day,
\n- average the result of step 1.
\n
\nA tested example may be found here: http://sqlfiddle.com/#!2/05760/4
\nSELECT \n AVG(posts_per_day) AS AVG_POSTS_PER_DAY\nFROM ( \n SELECT \n CAST(`date` as date), \n COUNT(*) posts_per_day\n FROM posts \n GROUP BY \n CAST(`date` as date)\n) ppd\n
\n
soup wrap:
You can do this in one step. A tested example may be found here: http://sqlfiddle.com/#!2/05760/12
SELECT
COUNT(*) /
COUNT(DISTINCT cast(`date` as date)) avg_posts_per_day
FROM
posts
Or you can do this in two steps:
- get posts per day,
- average the result of step 1.
A tested example may be found here: http://sqlfiddle.com/#!2/05760/4
SELECT
AVG(posts_per_day) AS AVG_POSTS_PER_DAY
FROM (
SELECT
CAST(`date` as date),
COUNT(*) posts_per_day
FROM posts
GROUP BY
CAST(`date` as date)
) ppd
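The one-step COUNT(*) / COUNT(DISTINCT day) trick can be checked with SQLite through Python's sqlite3 (sample timestamps invented; note the `* 1.0`, since unlike MySQL, SQLite would otherwise do integer division):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE posts (date TEXT);
INSERT INTO posts VALUES
 ('2013-01-01 09:00:00'), ('2013-01-01 17:30:00'),
 ('2013-01-02 08:15:00'), ('2013-01-02 12:00:00'),
 ('2013-01-02 19:45:00'), ('2013-01-03 10:00:00');
""")
# 6 posts over 3 distinct days -> 2.0 posts per day on average.
avg = cur.execute(
    "SELECT COUNT(*) * 1.0 / COUNT(DISTINCT date(date)) FROM posts"
).fetchone()[0]
```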
qid & accept id:
(14636287, 14697299)
query:
Convert local datetime from xml to datetime in sql
soup:
declare @XMLData xml = '\n\n 0008E02B66DD_ \n 03.20 \n 2 \n 0001-01-01T00:00:00 \n \n 99 \n 2012-02-03T13:00:00+13:00 \n \n \n ';\n\nselect T.N.value('substring((RecordedDate/text())[1], 1, 19)', 'datetime'),\n T.N.value('(RecordedDate/text())[1]', 'datetime'),\n T.N.value('(RecordedDate/text())[1]', 'datetimeoffset')\nfrom @XMLData.nodes('/Upload/Sessions') as T(N);\n
\nResult:
\n2012-02-03 13:00:00.000 \n2012-02-03 00:00:00.000 \n2012-02-03 13:00:00.0000000 +13:00\n
\n
soup wrap:
declare @XMLData xml = '
0008E02B66DD_
03.20
2
0001-01-01T00:00:00
99
2012-02-03T13:00:00+13:00
';
select T.N.value('substring((RecordedDate/text())[1], 1, 19)', 'datetime'),
T.N.value('(RecordedDate/text())[1]', 'datetime'),
T.N.value('(RecordedDate/text())[1]', 'datetimeoffset')
from @XMLData.nodes('/Upload/Sessions') as T(N);
Result:
2012-02-03 13:00:00.000
2012-02-03 00:00:00.000
2012-02-03 13:00:00.0000000 +13:00
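The substring trick above works because keeping only the first 19 characters of the ISO value discards the timezone offset, so the literal wall-clock time is kept instead of being shifted. The same idea in plain Python, for comparison:

```python
from datetime import datetime

raw = '2012-02-03T13:00:00+13:00'

# First 19 chars = 'YYYY-MM-DDTHH:MM:SS'; the '+13:00' offset is dropped,
# mirroring the substring(..., 1, 19) cast to datetime above.
local = datetime.strptime(raw[:19], '%Y-%m-%dT%H:%M:%S')

# Parsing the full string keeps the offset, mirroring datetimeoffset.
aware = datetime.fromisoformat(raw)
```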
qid & accept id:
(14636901, 14637065)
query:
PostgreSQL ORDER BY with VIEWs
soup:
This is possible if you use row_number() over().
\nHere is an example:
\nSELECT\n p.*\n ,h.address\n ,h.appraisal\nFROM (SELECT *, row_number() over() rn FROM people) p\nLEFT JOIN homes h\n ON h.person_id = p.person_id\nORDER BY p.rn, h.appraisal;\n
\nAnd here is the SQL Fiddle you can test with.
\nAs @Erwin Brandstetter correctly points out, using rank() will produce the correct results and allow for sorting on additional fields (in this case, appraisal).
\nSELECT\n p.*\n ,h.address\n ,h.appraisal\nFROM (SELECT *, rank() over() rn FROM people) p\nLEFT JOIN homes h\n ON h.person_id = p.person_id\nORDER BY p.rn, h.appraisal;\n
\nThink about it this way: using row_number(), it will always sort by that field only, regardless of any other sorting parameters. By using rank(), where ties share the same value, other fields can easily be searched on.
\nGood luck.
\n
soup wrap:
This is possible if you use row_number() over().
Here is an example:
SELECT
p.*
,h.address
,h.appraisal
FROM (SELECT *, row_number() over() rn FROM people) p
LEFT JOIN homes h
ON h.person_id = p.person_id
ORDER BY p.rn, h.appraisal;
And here is the SQL Fiddle you can test with.
As @Erwin Brandstetter correctly points out, using rank() will produce the correct results and allow for sorting on additional fields (in this case, appraisal).
SELECT
p.*
,h.address
,h.appraisal
FROM (SELECT *, rank() over() rn FROM people) p
LEFT JOIN homes h
ON h.person_id = p.person_id
ORDER BY p.rn, h.appraisal;
Think about it this way: using row_number(), it will always sort by that field only, regardless of any other sorting parameters. By using rank(), where ties share the same value, other fields can easily be searched on.
Good luck.
qid & accept id:
(14672688, 14672737)
query:
How to Update a MYSQL Column Based On Varying Conditions
soup:
I'd prefer to use CASE here.
\nUPDATE TAble1\nSET Result = CASE value\n WHEN 1 THEN x\n WHEN 2 THEN y\n ....\n ELSE z\n END\n
\nor
\nUPDATE TAble1\nSET Result = CASE \n WHEN value = 1 THEN x\n WHEN value = 2 THEN y\n ....\n ELSE z\n END\n
\n
soup wrap:
I'd prefer to use CASE here.
UPDATE TAble1
SET Result = CASE value
WHEN 1 THEN x
WHEN 2 THEN y
....
ELSE z
END
or
UPDATE TAble1
SET Result = CASE
WHEN value = 1 THEN x
WHEN value = 2 THEN y
....
ELSE z
END
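Both CASE forms behave the same way for simple equality tests. A minimal runnable check of the pattern with SQLite through Python's sqlite3 (the values x/y/z become strings here purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Table1 (value INTEGER, Result TEXT);
INSERT INTO Table1 (value) VALUES (1), (2), (7);
""")
# One UPDATE sets a different Result per row depending on value;
# the ELSE branch catches everything not listed.
cur.execute("""
UPDATE Table1
SET Result = CASE value
               WHEN 1 THEN 'x'
               WHEN 2 THEN 'y'
               ELSE 'z'
             END
""")
rows = cur.execute("SELECT value, Result FROM Table1 ORDER BY value").fetchall()
```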
qid & accept id:
(14675304, 14675363)
query:
How to get (One Before Last) row in SQL Server 2005
soup:
In SQL, tables are inherently unordered. So, let me assume that you have a column that specifies the ordering -- an id column, a date time, or something like that.
\nThe following does what you want:
\nselect top 4 *\nfrom (select top 5 *\n from Article a\n order by id desc\n ) a\norder by id asc\n
\nIf for some reason you don't have an id, you can take your chances with the following query:
\nselect a.*\nfrom (select a.*, row_number() over (order by (select NULL)) as seqnum,\n count(*) over () as totcnt\n from Article a\n ) a\nwhere seqnum between totcnt - 5 and totcnt - 1\n
\nI want to emphasize that this is not guaranteed to work. In my experience, I have seen that definition of seqnum assign sequential number to rows in order. BUT THIS IS NOT GUARANTEED TO WORK, and will probably not work in a multi-threaded environment. But, you might get lucky (particularly if your rows fit on one data page).
\nBy the way, you can use the same idea with a real column:
\nselect a.*\nfrom (select a.*, row_number() over (order by id) as seqnum,\n count(*) over () as totcnt\n from Article a\n ) a\nwhere seqnum between totcnt - 5 and totcnt - 1\n
\n
soup wrap:
In SQL, tables are inherently unordered. So, let me assume that you have a column that specifies the ordering -- an id column, a date time, or something like that.
The following does what you want:
select top 4 *
from (select top 5 *
from Article a
order by id desc
) a
order by id asc
If for some reason you don't have an id, you can take your chances with the following query:
select a.*
from (select a.*, row_number() over (order by (select NULL)) as seqnum,
count(*) over () as totcnt
from Article a
) a
where seqnum between totcnt - 5 and totcnt - 1
I want to emphasize that this is not guaranteed to work. In my experience, I have seen that definition of seqnum assign sequential numbers to rows in order. BUT THIS IS NOT GUARANTEED TO WORK, and will probably not work in a multi-threaded environment. But, you might get lucky (particularly if your rows fit on one data page).
By the way, you can use the same idea with a real column:
select a.*
from (select a.*, row_number() over (order by id) as seqnum,
count(*) over () as totcnt
from Article a
) a
where seqnum between totcnt - 5 and totcnt - 1
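The first query's "top 5 descending, then re-sort ascending and take 4" idea translates to LIMIT in other engines. A sketch with SQLite through Python's sqlite3 (seven sample ids invented; the last row returned is the one before last overall):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Article (id INTEGER PRIMARY KEY);
INSERT INTO Article (id) VALUES (1), (2), (3), (4), (5), (6), (7);
""")
# Inner query: last 5 rows by id (7,6,5,4,3).
# Outer query: re-sort ascending and drop the very last row.
rows = cur.execute("""
SELECT * FROM (
    SELECT * FROM Article ORDER BY id DESC LIMIT 5
) a
ORDER BY id ASC
LIMIT 4
""").fetchall()
```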
qid & accept id:
(14699703, 14700329)
query:
Loop through all rows and concat unique values in SQL table
soup:
You could concat as string aggregation using the format for your Table1,
\nSELECT col1,\n col2,\n col3,\n listagg(col4, ',') within GROUP(\nORDER BY col4) AS col4\nFROM agg_test\nGROUP BY col1,\n col2,\n col3;\n
\nYou could get the result as:
\ncol1 col2 col3 col4\n______________________________________ \nval1 val2 val3 val4,val5,val6\nvalx valy valz val4,val5\n
\n
soup wrap:
You could concatenate col4 as a string aggregation using LISTAGG, given the format of your Table1:
SELECT col1,
col2,
col3,
listagg(col4, ',') within GROUP(
ORDER BY col4) AS col4
FROM agg_test
GROUP BY col1,
col2,
col3;
You could get the result as:
col1 col2 col3 col4
______________________________________
val1 val2 val3 val4,val5,val6
valx valy valz val4,val5
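LISTAGG is Oracle's string aggregation; other engines have equivalents, e.g. group_concat in SQLite and MySQL. A runnable sketch of the same grouping with SQLite through Python's sqlite3, using the sample values from the answer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE agg_test (col1 TEXT, col2 TEXT, col3 TEXT, col4 TEXT);
INSERT INTO agg_test VALUES
 ('val1','val2','val3','val4'),
 ('val1','val2','val3','val5'),
 ('val1','val2','val3','val6'),
 ('valx','valy','valz','val4'),
 ('valx','valy','valz','val5');
""")
# One output row per (col1, col2, col3) group, with col4 values joined
# by commas. Unlike LISTAGG, group_concat gives no ordering guarantee.
rows = cur.execute("""
SELECT col1, col2, col3, group_concat(col4, ',') AS col4
FROM agg_test
GROUP BY col1, col2, col3
ORDER BY col1
""").fetchall()
```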
qid & accept id:
(14705215, 14705360)
query:
How to find rows in SQL / MySQL with ORDER BY
soup:
This will give current rank for user1:
\nSELECT count(*) AS rank\nFROM user\nWHERE poin >= (SELECT poin FROM user WHERE name = 'user1')\n
\nSmall issue with this query is that if another user has the same points, it will be assigned the same rank - whether it is correct, it is questionable.
\nIf you want to simply add rank for every user, use this:
\nSELECT\n @rank:=@rank+1 AS rank,\n name,\n poin\nFROM user,\n (SELECT @rank:=0) r\nORDER BY poin DESC\n
\nYou can use small variation of this query to get rank of single user, but avoid issue of the same ranking ambiguity:
\nSELECT *\nFROM (\n SELECT\n @rank:=@rank+1 AS rank,\n name,\n poin\n FROM user,\n (SELECT @rank:=0) r\n ORDER BY poin DESC\n) x\nWHERE name = 'user1'\n
\n
soup wrap:
This will give current rank for user1:
SELECT count(*) AS rank
FROM user
WHERE poin >= (SELECT poin FROM user WHERE name = 'user1')
A small issue with this query is that if another user has the same points, they will be assigned the same rank - whether that is correct is questionable.
If you want to simply add rank for every user, use this:
SELECT
@rank:=@rank+1 AS rank,
name,
poin
FROM user,
(SELECT @rank:=0) r
ORDER BY poin DESC
You can use a small variation of this query to get the rank of a single user while avoiding the same-rank ambiguity:
SELECT *
FROM (
SELECT
@rank:=@rank+1 AS rank,
name,
poin
FROM user,
(SELECT @rank:=0) r
ORDER BY poin DESC
) x
WHERE name = 'user1'
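The first query (count of users with at least as many points) is portable across engines, since it needs no session variables. A check with SQLite through Python's sqlite3, with invented point totals:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE user (name TEXT, poin INTEGER);
INSERT INTO user VALUES ('user1', 50), ('user2', 80), ('user3', 30);
""")
# Rank = number of users whose points are >= user1's points.
# user2 (80) and user1 (50) qualify, so user1 is ranked 2nd.
rank = cur.execute("""
SELECT count(*) FROM user
WHERE poin >= (SELECT poin FROM user WHERE name = 'user1')
""").fetchone()[0]
```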
qid & accept id:
(14730469, 14730620)
query:
Row data to column
soup:
\nMS SQL Server 2008 Schema Setup:
\ncreate table tblFile\n(\n FileName varchar(10),\n FileLocation varchar(30)\n)\n\ninsert into tblFile values\n('file1', '\\server1\folder1\file1'),\n('file1', '\\server2\folder1\file1'),\n('file2', '\\server1\folder1\file2'),\n('file2', '\\server2\folder1\file2')\n
\nQuery 1:
\nselect T1.FileName,\n (\n select ', '+T2.FileLocation\n from tblFile as T2\n where T1.FileName = T2.FileName\n for xml path(''), type\n ).value('substring(text()[1], 3)', 'varchar(max)') as FileLocations\nfrom tblFile as T1\ngroup by T1.FileName\n
\n\n| FILENAME | FILELOCATIONS |\n---------------------------------------------------------------\n| file1 | \\server1\folder1\file1, \\server2\folder1\file1 |\n| file2 | \\server1\folder1\file2, \\server2\folder1\file2 |\n
\n
soup wrap:
MS SQL Server 2008 Schema Setup:
create table tblFile
(
FileName varchar(10),
FileLocation varchar(30)
)
insert into tblFile values
('file1', '\\server1\folder1\file1'),
('file1', '\\server2\folder1\file1'),
('file2', '\\server1\folder1\file2'),
('file2', '\\server2\folder1\file2')
Query 1:
select T1.FileName,
(
select ', '+T2.FileLocation
from tblFile as T2
where T1.FileName = T2.FileName
for xml path(''), type
).value('substring(text()[1], 3)', 'varchar(max)') as FileLocations
from tblFile as T1
group by T1.FileName
| FILENAME | FILELOCATIONS |
---------------------------------------------------------------
| file1 | \\server1\folder1\file1, \\server2\folder1\file1 |
| file2 | \\server1\folder1\file2, \\server2\folder1\file2 |
qid & accept id:
(14732938, 14732970)
query:
Pivot on a single table
soup:
This type of data transformation is known as a PIVOT. Starting in SQL Server 2005 there is a function that can perform this data rotation for you. But this can be done many different ways.
\nYou can use an aggregate function and a CASE to pivot the data:
\nselect\n name,\n max(case when date = '2013-04-01' then city end) [City 04/01/2013],\n max(case when date = '2013-05-01' then city end) [City 05/01/2013]\nfrom yourtable\ngroup by name\n
\n\nOr you can use the PIVOT function:
\nselect name, [2013-04-01] as [City 04/01/2013], [2013-05-01] as [City 05/01/2013]\nfrom\n(\n select name, city, date\n from yourtable\n) src\npivot\n(\n max(city)\n for date in ([2013-04-01], [2013-05-01])\n) piv\n
\nSee SQL Fiddle with Demo.
\nThis can even be done by joining on your table multiple times:
\nselect d1.name,\n d1.city [City 04/01/2013], \n d2.city [City 05/01/2013]\nfrom yourtable d1\nleft join yourtable d2\n on d1.name = d2.name\n and d2.date = '2013-05-01'\nwhere d1.date = '2013-04-01'\n
\nSee SQL Fiddle with Demo.
\nThe above queries will work great if you have known dates that you want to transform into columns. But if you have an unknown number of columns, then you will want to use dynamic sql:
\nDECLARE @cols AS NVARCHAR(MAX),\n @colNames AS NVARCHAR(MAX),\n @query AS NVARCHAR(MAX)\n\nselect @cols = STUFF((SELECT distinct ',' + QUOTENAME(convert(char(10), date, 120)) \n from yourtable\n FOR XML PATH(''), TYPE\n ).value('.', 'NVARCHAR(MAX)') \n ,1,1,'')\n\nselect @colNames = STUFF((SELECT distinct ',' + QUOTENAME(convert(char(10), date, 120)) +' as '+ QUOTENAME('City '+convert(char(10), date, 120))\n from yourtable\n FOR XML PATH(''), TYPE\n ).value('.', 'NVARCHAR(MAX)') \n ,1,1,'')\n\nset @query = 'SELECT name, ' + @colNames + ' from \n (\n select name, \n city, \n convert(char(10), date, 120) date\n from yourtable\n ) x\n pivot \n (\n max(city)\n for date in (' + @cols + ')\n ) p '\n\nexecute(@query)\n
\n\nAll of them give the result:
\n| NAME | CITY 04/01/2013 | CITY 05/01/2013 |\n----------------------------------------------\n| Paul | Milan | Berlin |\n| Charls | Rome | El Cairo |\n| Jim | Tokyo | Milan |\n| Justin | San Francisco | Paris |\n| Bill | London | Madrid |\n
\n
soup wrap:
This type of data transformation is known as a PIVOT. Starting in SQL Server 2005 there is a function that can perform this data rotation for you. But this can be done many different ways.
You can use an aggregate function and a CASE to pivot the data:
select
name,
max(case when date = '2013-04-01' then city end) [City 04/01/2013],
max(case when date = '2013-05-01' then city end) [City 05/01/2013]
from yourtable
group by name
Or you can use the PIVOT function:
select name, [2013-04-01] as [City 04/01/2013], [2013-05-01] as [City 05/01/2013]
from
(
select name, city, date
from yourtable
) src
pivot
(
max(city)
for date in ([2013-04-01], [2013-05-01])
) piv
See SQL Fiddle with Demo.
This can even be done by joining on your table multiple times:
select d1.name,
d1.city [City 04/01/2013],
d2.city [City 05/01/2013]
from yourtable d1
left join yourtable d2
on d1.name = d2.name
and d2.date = '2013-05-01'
where d1.date = '2013-04-01'
See SQL Fiddle with Demo.
The above queries will work great if you have known dates that you want to transform into columns. But if you have an unknown number of columns, then you will want to use dynamic sql:
DECLARE @cols AS NVARCHAR(MAX),
@colNames AS NVARCHAR(MAX),
@query AS NVARCHAR(MAX)
select @cols = STUFF((SELECT distinct ',' + QUOTENAME(convert(char(10), date, 120))
from yourtable
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
,1,1,'')
select @colNames = STUFF((SELECT distinct ',' + QUOTENAME(convert(char(10), date, 120)) +' as '+ QUOTENAME('City '+convert(char(10), date, 120))
from yourtable
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
,1,1,'')
set @query = 'SELECT name, ' + @colNames + ' from
(
select name,
city,
convert(char(10), date, 120) date
from yourtable
) x
pivot
(
max(city)
for date in (' + @cols + ')
) p '
execute(@query)
All of them give the result:
| NAME | CITY 04/01/2013 | CITY 05/01/2013 |
----------------------------------------------
| Paul | Milan | Berlin |
| Charls | Rome | El Cairo |
| Jim | Tokyo | Milan |
| Justin | San Francisco | Paris |
| Bill | London | Madrid |
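The first approach (aggregate + CASE) is the most portable of the three, since it needs no PIVOT operator. A runnable sketch with SQLite through Python's sqlite3, using a subset of the names from the result table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE yourtable (name TEXT, city TEXT, date TEXT);
INSERT INTO yourtable VALUES
 ('Paul', 'Milan',  '2013-04-01'),
 ('Paul', 'Berlin', '2013-05-01'),
 ('Jim',  'Tokyo',  '2013-04-01'),
 ('Jim',  'Milan',  '2013-05-01');
""")
# Each CASE picks out the city for one date; MAX collapses the group's
# NULLs so each name ends up on a single row with one column per date.
rows = cur.execute("""
SELECT name,
       max(CASE WHEN date = '2013-04-01' THEN city END) AS city_apr,
       max(CASE WHEN date = '2013-05-01' THEN city END) AS city_may
FROM yourtable
GROUP BY name
ORDER BY name
""").fetchall()
```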
qid & accept id:
(14746540, 14781686)
query:
How to select from table where the table name is a local variable(informix)
soup:
Assuming you have a recent enough version of Informix (11.70), you should be able to use Dynamic SQL in SPL like this:
\nBEGIN;\n\nCREATE TABLE rnmtask\n(\n pdf_column VARCHAR(32) NOT NULL,\n table_name VARCHAR(32) NOT NULL,\n task_code INTEGER NOT NULL PRIMARY KEY\n);\n\nINSERT INTO rnmtask VALUES("symbol", "elements", 1);\nINSERT INTO rnmtask VALUES("name", "elements", 2);\nINSERT INTO rnmtask VALUES("atomic_number", "elements", 3);\n\nCREATE PROCEDURE rmg_request_file(al_task_code INTEGER)\n RETURNING VARCHAR(255) AS colval;\n\n DEFINE ll_pdf_column VARCHAR(50);\n DEFINE ll_tb_name VARCHAR(60);\n DEFINE stmt VARCHAR(255);\n DEFINE result VARCHAR(255);\n\n SELECT pdf_column, table_name\n INTO ll_pdf_column, ll_tb_name\n FROM rnmtask\n WHERE task_code = al_task_code;\n\n LET stmt = "SELECT " || ll_pdf_column || " FROM " || ll_tb_name;\n PREPARE p FROM stmt;\n DECLARE C CURSOR FOR p;\n OPEN C;\n WHILE sqlcode = 0\n FETCH C INTO result;\n IF sqlcode != 0 THEN\n EXIT WHILE;\n END IF;\n RETURN result WITH RESUME;\n END WHILE;\n\n CLOSE C;\n FREE C;\n FREE p;\n\nEND PROCEDURE;\n\nEXECUTE PROCEDURE rmg_request_file(1);\nEXECUTE PROCEDURE rmg_request_file(2);\nEXECUTE PROCEDURE rmg_request_file(3);\n\nROLLBACK;\n
\nThis assumes you have a convenient Table of Elements in your database:
\nCREATE TABLE elements\n(\n atomic_number INTEGER NOT NULL PRIMARY KEY CONSTRAINT c1_elements\n CHECK (atomic_number > 0 AND atomic_number < 120),\n symbol CHAR(3) NOT NULL UNIQUE CONSTRAINT c2_elements,\n name CHAR(20) NOT NULL UNIQUE CONSTRAINT c3_elements,\n atomic_weight DECIMAL(8, 4) NOT NULL,\n period SMALLINT NOT NULL\n CHECK (period BETWEEN 1 AND 7),\n group CHAR(2) NOT NULL\n -- 'L' for Lanthanoids, 'A' for Actinoids\n CHECK (group IN ('1', '2', 'L', 'A', '3', '4', '5', '6',\n '7', '8', '9', '10', '11', '12', '13',\n '14', '15', '16', '17', '18')),\n stable CHAR(1) DEFAULT 'Y' NOT NULL\n CHECK (stable IN ('Y', 'N'))\n);\n\nINSERT INTO elements VALUES( 1, 'H', 'Hydrogen', 1.0079, 1, '1', 'Y');\nINSERT INTO elements VALUES( 2, 'He', 'Helium', 4.0026, 1, '18', 'Y');\nINSERT INTO elements VALUES( 3, 'Li', 'Lithium', 6.9410, 2, '1', 'Y');\nINSERT INTO elements VALUES( 4, 'Be', 'Beryllium', 9.0122, 2, '2', 'Y');\nINSERT INTO elements VALUES( 5, 'B', 'Boron', 10.8110, 2, '13', 'Y');\nINSERT INTO elements VALUES( 6, 'C', 'Carbon', 12.0110, 2, '14', 'Y');\nINSERT INTO elements VALUES( 7, 'N', 'Nitrogen', 14.0070, 2, '15', 'Y');\nINSERT INTO elements VALUES( 8, 'O', 'Oxygen', 15.9990, 2, '16', 'Y');\nINSERT INTO elements VALUES( 9, 'F', 'Fluorine', 18.9980, 2, '17', 'Y');\nINSERT INTO elements VALUES( 10, 'Ne', 'Neon', 20.1800, 2, '18', 'Y');\nINSERT INTO elements VALUES( 11, 'Na', 'Sodium', 22.9900, 3, '1', 'Y');\nINSERT INTO elements VALUES( 12, 'Mg', 'Magnesium', 24.3050, 3, '2', 'Y');\nINSERT INTO elements VALUES( 13, 'Al', 'Aluminium', 26.9820, 3, '13', 'Y');\nINSERT INTO elements VALUES( 14, 'Si', 'Silicon', 28.0860, 3, '14', 'Y');\nINSERT INTO elements VALUES( 15, 'P', 'Phosphorus', 30.9740, 3, '15', 'Y');\nINSERT INTO elements VALUES( 16, 'S', 'Sulphur', 32.0650, 3, '16', 'Y');\nINSERT INTO elements VALUES( 17, 'Cl', 'Chlorine', 35.4530, 3, '17', 'Y');\nINSERT INTO elements VALUES( 18, 'Ar', 'Argon', 39.9480, 3, '18', 'Y');\n
\n
soup wrap:
Assuming you have a recent enough version of Informix (11.70), you should be able to use Dynamic SQL in SPL like this:
BEGIN;
CREATE TABLE rnmtask
(
pdf_column VARCHAR(32) NOT NULL,
table_name VARCHAR(32) NOT NULL,
task_code INTEGER NOT NULL PRIMARY KEY
);
INSERT INTO rnmtask VALUES("symbol", "elements", 1);
INSERT INTO rnmtask VALUES("name", "elements", 2);
INSERT INTO rnmtask VALUES("atomic_number", "elements", 3);
CREATE PROCEDURE rmg_request_file(al_task_code INTEGER)
RETURNING VARCHAR(255) AS colval;
DEFINE ll_pdf_column VARCHAR(50);
DEFINE ll_tb_name VARCHAR(60);
DEFINE stmt VARCHAR(255);
DEFINE result VARCHAR(255);
SELECT pdf_column, table_name
INTO ll_pdf_column, ll_tb_name
FROM rnmtask
WHERE task_code = al_task_code;
LET stmt = "SELECT " || ll_pdf_column || " FROM " || ll_tb_name;
PREPARE p FROM stmt;
DECLARE C CURSOR FOR p;
OPEN C;
WHILE sqlcode = 0
FETCH C INTO result;
IF sqlcode != 0 THEN
EXIT WHILE;
END IF;
RETURN result WITH RESUME;
END WHILE;
CLOSE C;
FREE C;
FREE p;
END PROCEDURE;
EXECUTE PROCEDURE rmg_request_file(1);
EXECUTE PROCEDURE rmg_request_file(2);
EXECUTE PROCEDURE rmg_request_file(3);
ROLLBACK;
This assumes you have a convenient Table of Elements in your database:
CREATE TABLE elements
(
atomic_number INTEGER NOT NULL PRIMARY KEY CONSTRAINT c1_elements
CHECK (atomic_number > 0 AND atomic_number < 120),
symbol CHAR(3) NOT NULL UNIQUE CONSTRAINT c2_elements,
name CHAR(20) NOT NULL UNIQUE CONSTRAINT c3_elements,
atomic_weight DECIMAL(8, 4) NOT NULL,
period SMALLINT NOT NULL
CHECK (period BETWEEN 1 AND 7),
group CHAR(2) NOT NULL
-- 'L' for Lanthanoids, 'A' for Actinoids
CHECK (group IN ('1', '2', 'L', 'A', '3', '4', '5', '6',
'7', '8', '9', '10', '11', '12', '13',
'14', '15', '16', '17', '18')),
stable CHAR(1) DEFAULT 'Y' NOT NULL
CHECK (stable IN ('Y', 'N'))
);
INSERT INTO elements VALUES( 1, 'H', 'Hydrogen', 1.0079, 1, '1', 'Y');
INSERT INTO elements VALUES( 2, 'He', 'Helium', 4.0026, 1, '18', 'Y');
INSERT INTO elements VALUES( 3, 'Li', 'Lithium', 6.9410, 2, '1', 'Y');
INSERT INTO elements VALUES( 4, 'Be', 'Beryllium', 9.0122, 2, '2', 'Y');
INSERT INTO elements VALUES( 5, 'B', 'Boron', 10.8110, 2, '13', 'Y');
INSERT INTO elements VALUES( 6, 'C', 'Carbon', 12.0110, 2, '14', 'Y');
INSERT INTO elements VALUES( 7, 'N', 'Nitrogen', 14.0070, 2, '15', 'Y');
INSERT INTO elements VALUES( 8, 'O', 'Oxygen', 15.9990, 2, '16', 'Y');
INSERT INTO elements VALUES( 9, 'F', 'Fluorine', 18.9980, 2, '17', 'Y');
INSERT INTO elements VALUES( 10, 'Ne', 'Neon', 20.1800, 2, '18', 'Y');
INSERT INTO elements VALUES( 11, 'Na', 'Sodium', 22.9900, 3, '1', 'Y');
INSERT INTO elements VALUES( 12, 'Mg', 'Magnesium', 24.3050, 3, '2', 'Y');
INSERT INTO elements VALUES( 13, 'Al', 'Aluminium', 26.9820, 3, '13', 'Y');
INSERT INTO elements VALUES( 14, 'Si', 'Silicon', 28.0860, 3, '14', 'Y');
INSERT INTO elements VALUES( 15, 'P', 'Phosphorus', 30.9740, 3, '15', 'Y');
INSERT INTO elements VALUES( 16, 'S', 'Sulphur', 32.0650, 3, '16', 'Y');
INSERT INTO elements VALUES( 17, 'Cl', 'Chlorine', 35.4530, 3, '17', 'Y');
INSERT INTO elements VALUES( 18, 'Ar', 'Argon', 39.9480, 3, '18', 'Y');
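The core of the SPL procedure - look up the column and table names, build the SQL string, then prepare and run it - is the same in any host language. A hypothetical Python analogue using sqlite3 (the identifier check stands in for whatever validation your environment needs, since identifiers cannot be bound as parameters):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE rnmtask (pdf_column TEXT, table_name TEXT,
                      task_code INTEGER PRIMARY KEY);
INSERT INTO rnmtask VALUES ('symbol', 'elements', 1);
CREATE TABLE elements (atomic_number INTEGER, symbol TEXT, name TEXT);
INSERT INTO elements VALUES (1, 'H', 'Hydrogen'), (2, 'He', 'Helium');
""")
# Step 1: fetch the column/table names for the task (parameterized).
col, tbl = cur.execute(
    "SELECT pdf_column, table_name FROM rnmtask WHERE task_code = ?", (1,)
).fetchone()
# Step 2: identifiers must be interpolated, so whitelist them first
# to avoid SQL injection, then run the dynamically built query.
assert col.isidentifier() and tbl.isidentifier()
values = [r[0] for r in cur.execute(f'SELECT "{col}" FROM "{tbl}"')]
```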
qid & accept id:
(14790098, 14790136)
query:
MySQL - SUM of a group of time differences
soup:
Select SEC_TO_TIME(SUM(TIME_TO_SEC(timediff(timeOut, timeIn)))) AS totalhours\nFROM volHours \nWHERE username = 'skolcz'\n
\nIf not then maybe:
\nSelect SEC_TO_TIME((SELECT SUM(TIME_TO_SEC(timediff(timeOut, timeIn))) \nFROM volHours \nWHERE username = 'skolcz')) as totalhours\n
\n
soup wrap:
Select SEC_TO_TIME(SUM(TIME_TO_SEC(timediff(timeOut, timeIn)))) AS totalhours
FROM volHours
WHERE username = 'skolcz'
If not then maybe:
Select SEC_TO_TIME((SELECT SUM(TIME_TO_SEC(timediff(timeOut, timeIn)))
FROM volHours
WHERE username = 'skolcz')) as totalhours
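The same "convert to seconds, sum, convert back" idea works anywhere you can get epoch seconds. A sketch with SQLite through Python's sqlite3 (sample shifts invented; strftime('%s', ...) replaces TIME_TO_SEC):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE volHours (username TEXT, timeIn TEXT, timeOut TEXT);
INSERT INTO volHours VALUES
 ('skolcz', '2013-02-01 09:00:00', '2013-02-01 11:30:00'),
 ('skolcz', '2013-02-02 10:00:00', '2013-02-02 12:00:00');
""")
# Sum each shift's duration in seconds: 2.5h + 2h = 4.5h = 16200s.
total_secs = cur.execute("""
SELECT SUM(strftime('%s', timeOut) - strftime('%s', timeIn))
FROM volHours
WHERE username = 'skolcz'
""").fetchone()[0]
hours = total_secs / 3600
```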
qid & accept id:
(14792677, 14792810)
query:
MySQL upsert with extra check
soup:
Simple. Don't use VALUES() (you're already doing it to refer to the existing value of check_status):
\nINSERT INTO some_table (description, comment, some_unique_key)\nVALUES ('some description', 'some comment', 32)\nON DUPLICATE KEY UPDATE\ndescription = IF(check_status = 1, description, 'some description'),\ncomment = IF(check_status = 1, comment, 'some comment')\n
\nOr use it to set the new content rather than repeating yourself:
\nINSERT INTO some_table (description, comment, some_unique_key)\nVALUES ('some description', 'some comment', 32)\nON DUPLICATE KEY UPDATE\ndescription = IF(check_status = 1, description, VALUES(description)),\ncomment = IF(check_status = 1, comment, VALUES(comment))\n
\n
soup wrap:
Simple. Don't use VALUES() (you're already doing it to refer to the existing value of check_status):
INSERT INTO some_table (description, comment, some_unique_key)
VALUES ('some description', 'some comment', 32)
ON DUPLICATE KEY UPDATE
description = IF(check_status = 1, description, 'some description'),
comment = IF(check_status = 1, comment, 'some comment')
Or use it to set the new content rather than repeating yourself:
INSERT INTO some_table (description, comment, some_unique_key)
VALUES ('some description', 'some comment', 32)
ON DUPLICATE KEY UPDATE
description = IF(check_status = 1, description, VALUES(description)),
comment = IF(check_status = 1, comment, VALUES(comment))
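The same guarded upsert can be expressed with SQLite's ON CONFLICT clause, where `excluded.col` plays the role of MySQL's VALUES(col) and a CASE replaces IF. A runnable sketch through Python's sqlite3 (table layout and check_status semantics assumed from the answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE some_table (
    some_unique_key INTEGER PRIMARY KEY,
    description     TEXT,
    comment         TEXT,
    check_status    INTEGER DEFAULT 0
);
-- Row 32 is "locked" (check_status = 1); row 33 is not.
INSERT INTO some_table VALUES (32, 'old desc', 'old comment', 1);
INSERT INTO some_table VALUES (33, 'old desc', 'old comment', 0);
""")
upsert = """
INSERT INTO some_table (some_unique_key, description, comment)
VALUES (?, ?, ?)
ON CONFLICT(some_unique_key) DO UPDATE SET
    description = CASE WHEN check_status = 1
                       THEN description ELSE excluded.description END,
    comment     = CASE WHEN check_status = 1
                       THEN comment ELSE excluded.comment END
"""
cur.execute(upsert, (32, 'new desc', 'new comment'))  # guarded: unchanged
cur.execute(upsert, (33, 'new desc', 'new comment'))  # not guarded: updated
after = {key: desc for key, desc in
         cur.execute("SELECT some_unique_key, description FROM some_table")}
```

In the DO UPDATE clause, a bare column name refers to the existing row, which is exactly the "don't repeat yourself" point the answer makes.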
qid & accept id:
(14830410, 14830905)
query:
Multiple Table Joins with WHERE clause
soup:
It seems like the following query is what you need. Notice that the filter for memberid = 200 has been moved to the join condition:
\nselect s.section_id,\n s.title,\n s.description,\n m.status\nfrom Sections s\nleft join SectionMembers sm\n on s.section_id = sm.section_id\n and sm.memberid = 200\nleft join MemberStatus m\n on sm.status_code = m.status_code\nwhere s.section_ownerid = 100;\n
\nNote: while your desired result shows that section_id=2 has a status of ActiveMember there is no way in your sample data to make this value link to section 2.
\nThis query gives the result:
\n| SECTION_ID | TITLE | DESCRIPTION | STATUS |\n------------------------------------------------------\n| 1 | title1 | desc1 | PendingMember |\n| 2 | title2 | desc2 | MemberRejected |\n| 3 | title3 | desc3 | MemberRejected |\n| 4 | title4 | desc4 | ActiveMember |\n| 5 | title5 | desc5 | (null) |\n| 6 | title6 | desc6 | (null) |\n
\n
soup wrap:
It seems like the following query is what you need. Notice that the filter for memberid = 200 has been moved to the join condition:
select s.section_id,
s.title,
s.description,
m.status
from Sections s
left join SectionMembers sm
on s.section_id = sm.section_id
and sm.memberid = 200
left join MemberStatus m
on sm.status_code = m.status_code
where s.section_ownerid = 100;
Note: while your desired result shows that section_id=2 has a status of ActiveMember, there is no way in your sample data to link that value to section 2.
This query gives the result:
| SECTION_ID | TITLE | DESCRIPTION | STATUS |
------------------------------------------------------
| 1 | title1 | desc1 | PendingMember |
| 2 | title2 | desc2 | MemberRejected |
| 3 | title3 | desc3 | MemberRejected |
| 4 | title4 | desc4 | ActiveMember |
| 5 | title5 | desc5 | (null) |
| 6 | title6 | desc6 | (null) |
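The key point - moving the member filter into the join condition keeps unmatched sections in the result - can be demonstrated with SQLite through Python's sqlite3. This simplified sketch puts the status directly on SectionMembers rather than a separate MemberStatus table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Sections (section_id INTEGER, title TEXT, section_ownerid INTEGER);
CREATE TABLE SectionMembers (section_id INTEGER, memberid INTEGER, status TEXT);
INSERT INTO Sections VALUES (1, 'title1', 100), (2, 'title2', 100);
INSERT INTO SectionMembers VALUES (1, 200, 'ActiveMember'),
                                  (2, 999, 'ActiveMember');
""")
# Filter in the ON clause: every section survives, with NULL status
# where member 200 has no row.
on_rows = cur.execute("""
SELECT s.section_id, sm.status
FROM Sections s
LEFT JOIN SectionMembers sm
  ON s.section_id = sm.section_id AND sm.memberid = 200
WHERE s.section_ownerid = 100
ORDER BY s.section_id
""").fetchall()
# Same filter in the WHERE clause: NULL rows are filtered out, so the
# LEFT JOIN silently behaves like an INNER JOIN.
where_rows = cur.execute("""
SELECT s.section_id, sm.status
FROM Sections s
LEFT JOIN SectionMembers sm ON s.section_id = sm.section_id
WHERE s.section_ownerid = 100 AND sm.memberid = 200
ORDER BY s.section_id
""").fetchall()
```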
qid & accept id:
(14838374, 14838929)
query:
TSQL dynamic filters on one column
soup:
I'm making 4 assumptions here:
\n\n- You have SQL-Server 2008 or later (tag is only sql-server)
\n- Your criteria will always be in the format
name = Y and value >=10 and value <= 25 \n- Your values column is actually an int column (based on your where\nclause)
\n- Your separate criteria should be separated by OR not and (since in\nyour example you have
WHERE (Name = 'x' ..) AND (Name = 'y'...)\nwhich will never evaluate to true) \n
\nAssuming the above is true then you can use table valued parameters. The first step would be to create your parameter:
\nCREATE TYPE dbo.TableFilter AS TABLE \n( Name VARCHAR(50), \n LowerValue INT, \n UpperValue INT\n);\n
\nThen you can create a procedure to get your filtered results
\nCREATE PROCEDURE dbo.CustomTableFilter @Filter dbo.TableFilter READONLY\nAS\n SELECT T.*\n FROM T\n WHERE EXISTS\n ( SELECT 1\n FROM @Filter f\n WHERE T.Name = f.Name\n AND T.Value >= f.LowerValue \n AND T.Value <= f.UpperValue\n )\n
\nThen you can call your procedure using something like:
\nDECLARE @Filter dbo.TableFilter;\nINSERT @Filter VALUES ('X', 1, 5), ('Y', 10, 25);\n\nEXECUTE dbo.CustomTableFilter @Filter;\n
\n\n
soup wrap:
I'm making 4 assumptions here:
- You have SQL-Server 2008 or later (tag is only sql-server)
- Your criteria will always be in the format
name = Y and value >=10 and value <= 25
- Your values column is actually an int column (based on your where
clause)
- Your separate criteria should be combined with OR, not AND (since in
your example you have
WHERE (Name = 'x' ..) AND (Name = 'y'...)
which will never evaluate to true)
Assuming the above is true, you can use table-valued parameters. The first step would be to create your parameter type:
CREATE TYPE dbo.TableFilter AS TABLE
( Name VARCHAR(50),
LowerValue INT,
UpperValue INT
);
Then you can create a procedure to get your filtered results:
CREATE PROCEDURE dbo.CustomTableFilter @Filter dbo.TableFilter READONLY
AS
SELECT T.*
FROM T
WHERE EXISTS
( SELECT 1
FROM @Filter f
WHERE T.Name = f.Name
AND T.Value >= f.LowerValue
AND T.Value <= f.UpperValue
)
Then you can call your procedure using something like:
DECLARE @Filter dbo.TableFilter;
INSERT @Filter VALUES ('X', 1, 5), ('Y', 10, 25);
EXECUTE dbo.CustomTableFilter @Filter;
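T-SQL table-valued parameters aren't portable, but the EXISTS pattern itself is. As a minimal runnable sketch, here is the same filter in Python with SQLite, using a temp table as a stand-in for the @Filter parameter (table names and sample data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE T (Name TEXT, Value INTEGER);
    INSERT INTO T VALUES ('X', 3), ('X', 9), ('Y', 15), ('Y', 30), ('Z', 12);
    -- Temp table standing in for the table-valued parameter
    CREATE TEMP TABLE Filter (Name TEXT, LowerValue INTEGER, UpperValue INTEGER);
    INSERT INTO Filter VALUES ('X', 1, 5), ('Y', 10, 25);
""")
# Rows survive only if some filter row covers their (Name, Value)
rows = conn.execute("""
    SELECT T.Name, T.Value
    FROM T
    WHERE EXISTS (SELECT 1
                  FROM Filter f
                  WHERE T.Name = f.Name
                    AND T.Value >= f.LowerValue
                    AND T.Value <= f.UpperValue)
    ORDER BY T.Name, T.Value
""").fetchall()
```

The criteria rows combine as OR (any matching filter row keeps the data row), which is the behaviour the fourth assumption calls for.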
qid & accept id:
(14841239, 14842425)
query:
sql normalize a table
soup:
If I understand you correctly, you're working with columns that contain multiple, delimited values (like the PICK database) :
\n`For multiple parts, this character | is added and the structure is repeated.`\n
\nTypically, in a normalized database, one would have:
\nUNIT (something that might need service or repair)\nUnitId PK\nUnitDescription\n\nPARTS (repair / replacement parts)\nPartId PK\nPartDescription\n\nUNIT_SERVICES (instances of repair visits/ service)\nServiceID int primary key\nUnitId foreign key references UNIT\nServiceDate\nTechnicianID\netc\n\n\nSERVICE_PART (part used in the service)\nID primary key\nServiceID foreign key references SERVICE\nPartID foreign key references PART\nQuantity\n
\nThere could be zero, one, or multiple UNIT_SERVICES associated with a UNIT.\nThere could be zero, one, or multiple SERVICE_PARTS associated with a SERVICE.
\nIn a normalized database, each part used in the servicing of a unit would exist in its own row in the SERVICE_PART table. We would not find two or more parts in the same SERVICE_PART tuple, separated by some delimiter, as was commonly done in so-called multivalued databases, which were precursors to the modern OODBMS.
\n
soup wrap:
If I understand you correctly, you're working with columns that contain multiple, delimited values (like the PICK database):
`For multiple parts, this character | is added and the structure is repeated.`
Typically, in a normalized database, one would have:
UNIT (something that might need service or repair)
UnitId PK
UnitDescription
PARTS (repair / replacement parts)
PartId PK
PartDescription
UNIT_SERVICES (instances of repair visits/ service)
ServiceID int primary key
UnitId foreign key references UNIT
ServiceDate
TechnicianID
etc
SERVICE_PART (part used in the service)
ID primary key
ServiceID foreign key references SERVICE
PartID foreign key references PART
Quantity
There could be zero, one, or multiple UNIT_SERVICES associated with a UNIT.
There could be zero, one, or multiple SERVICE_PARTS associated with a SERVICE.
In a normalized database, each part used in the servicing of a unit would exist in its own row in the SERVICE_PART table. We would not find two or more parts in the same SERVICE_PART tuple, separated by some delimiter, as was commonly done in so-called multivalued databases, which were precursors to the modern OODBMS.
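As a sketch of what that normalization step involves, here is how a hypothetical '|'-delimited parts column could be split into one row per part, the way the SERVICE_PART table would store it (the data is invented for illustration):

```python
# Hypothetical denormalized rows: (service_id, parts), parts '|'-delimited
denormalized = [(1, 'bolt|gasket'), (2, 'filter')]

# One (service_id, part) row per part, as SERVICE_PART would hold them
service_part = [(service_id, part)
                for service_id, parts in denormalized
                for part in parts.split('|')]
```

Each part now occupies its own row, so it can carry its own Quantity and foreign keys.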
qid & accept id:
(14849316, 14849699)
query:
How to fetch consecutive pairs of records in Oracle
soup:
Like other commenters I'm not entirely sure I follow, but if you only want to look at IDs 4 and 5 and want to match them up in date order, you can do something like this:
\nwith t as (\n select id, dt, row_number() over (partition by id order by dt) as rn\n from t42\n where id in (4, 5)\n)\nselect t4.id as id4, t4.dt as date4, t5.id as id5, t5.dt as date5,\n case t4.rn when 1 then 'First' when 2 then 'Second' when 3 then 'Third' end\n || ' set of 4 and 5' as "Comment"\nfrom t t4\njoin t t5 on t5.rn = t4.rn\nwhere t4.id = 4\nand t5.id = 5\norder by t4.rn;\n\n ID4 DATE4 ID5 DATE5 Comment \n---------- --------- ---------- --------- ---------------------\n 4 02-JAN-13 5 05-JAN-13 First set of 4 and 5 \n 4 08-JAN-13 5 12-JAN-13 Second set of 4 and 5 \n
\nI'm not sure now if you actually want the 'comment' to be returned/displayed... probably not, which would simplify it slightly.
\n
\nFor modified requirements:
\nwith t as (\n select id, dt, lead(dt) over (partition by id order by dt) as next_dt\n from t42\n where id in (4, 5)\n)\nselect t4.id as id4, t4.dt as date4, t5.id as id5, min(t5.dt) as date5\nfrom t t4\njoin t t5 on t5.dt > t4.dt and (t4.next_dt is null or t5.dt <= t4.next_dt)\nwhere t4.id = 4\nand t5.id = 5\ngroup by t4.id, t4.dt, t5.id\norder by t4.dt;\n\n ID4 DATE4 ID5 DATE5 \n---------- --------------------- ---------- ---------------------\n 4 16.03.2012 17:49:28 5 10.05.2012 09:38:56 \n 4 12.06.2012 08:47:52 5 02.08.2012 11:27:43 \n 4 03.08.2012 13:24:54 5 03.08.2012 14:14:07 \n
\nThe CTE uses LEAD to peek at the next date for each ID, which is only really relevant for when ID is 4; and that can be null if there isn't an extra ID 4 without matches at the end. The join then only looks for ID 5 records that fall between two ID 4 dates (or after the last ID 4 date). If you want the alternate (later) ID 5 date in the first result just use MAX instead of MIN. (I'm not 100% about the > and <= matching; I've tried to interpret what you said, but you might need to tweak that if it isn't quite right).
\n
\nTo work around what appears to be a 9i bug (probably fixed in 9.2.0.3 or 9.2.0.6 according to MOS, but it depends exactly on which bug you're hitting):
\nselect t4.id as id4, t4.dt as date4, t5.id as id5, min(t5.dt) as date5\nfrom (\n select id, dt, lead(dt) over (partition by id order by dt) as next_dt\n from t42\n where id = 4\n) t4\njoin (select id, dt\n from t42\n where id = 5\n) t5 on t5.dt > t4.dt and (t4.next_dt is null or t5.dt <= t4.next_dt)\ngroup by t4.id, t4.dt, t5.id\norder by t4.dt;\n
\nI don't have an old enough version to test this against unfortunately. You don't have to use the t5 subselect, you could just join your main table straight to t4, but I think this is a little clearer.
\n
soup wrap:
Like other commenters I'm not entirely sure I follow, but if you only want to look at IDs 4 and 5 and want to match them up in date order, you can do something like this:
with t as (
select id, dt, row_number() over (partition by id order by dt) as rn
from t42
where id in (4, 5)
)
select t4.id as id4, t4.dt as date4, t5.id as id5, t5.dt as date5,
case t4.rn when 1 then 'First' when 2 then 'Second' when 3 then 'Third' end
|| ' set of 4 and 5' as "Comment"
from t t4
join t t5 on t5.rn = t4.rn
where t4.id = 4
and t5.id = 5
order by t4.rn;
ID4 DATE4 ID5 DATE5 Comment
---------- --------- ---------- --------- ---------------------
4 02-JAN-13 5 05-JAN-13 First set of 4 and 5
4 08-JAN-13 5 12-JAN-13 Second set of 4 and 5
I'm not sure now if you actually want the 'comment' to be returned/displayed... probably not, which would simplify it slightly.
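The row_number() pairing technique is not Oracle-specific; here is a small sketch of the same idea in Python with SQLite (window functions need SQLite 3.25+; the dates and IDs are invented to mirror the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t42 (id INTEGER, dt TEXT);
    INSERT INTO t42 VALUES
        (4, '2013-01-02'), (5, '2013-01-05'),
        (4, '2013-01-08'), (5, '2013-01-12');
""")
# Number the rows per ID by date, then join the Nth 4 to the Nth 5
rows = conn.execute("""
    WITH t AS (
        SELECT id, dt, ROW_NUMBER() OVER (PARTITION BY id ORDER BY dt) AS rn
        FROM t42 WHERE id IN (4, 5)
    )
    SELECT t4.dt, t5.dt
    FROM t t4
    JOIN t t5 ON t5.rn = t4.rn
    WHERE t4.id = 4 AND t5.id = 5
    ORDER BY t4.rn
""").fetchall()
```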
For modified requirements:
with t as (
select id, dt, lead(dt) over (partition by id order by dt) as next_dt
from t42
where id in (4, 5)
)
select t4.id as id4, t4.dt as date4, t5.id as id5, min(t5.dt) as date5
from t t4
join t t5 on t5.dt > t4.dt and (t4.next_dt is null or t5.dt <= t4.next_dt)
where t4.id = 4
and t5.id = 5
group by t4.id, t4.dt, t5.id
order by t4.dt;
ID4 DATE4 ID5 DATE5
---------- --------------------- ---------- ---------------------
4 16.03.2012 17:49:28 5 10.05.2012 09:38:56
4 12.06.2012 08:47:52 5 02.08.2012 11:27:43
4 03.08.2012 13:24:54 5 03.08.2012 14:14:07
The CTE uses LEAD to peek at the next date for each ID, which is only really relevant when the ID is 4; and that can be null if there isn't an extra ID 4 without matches at the end. The join then only looks for ID 5 records that fall between two ID 4 dates (or after the last ID 4 date). If you want the alternate (later) ID 5 date in the first result just use MAX instead of MIN. (I'm not 100% sure about the > and <= matching; I've tried to interpret what you said, but you might need to tweak that if it isn't quite right).
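The LEAD-based variant can be sketched the same way; this assumes SQLite 3.25+ and invented ISO-formatted dates so that plain string comparison orders them correctly:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t42 (id INTEGER, dt TEXT);
    INSERT INTO t42 VALUES
        (4, '2012-03-16'), (5, '2012-05-10'), (5, '2012-06-01'),
        (4, '2012-06-12'), (5, '2012-08-02');
""")
# next_dt is the following date for the same ID; each ID-4 row is then
# matched to the earliest ID-5 date before the next ID-4 date
rows = conn.execute("""
    WITH t AS (
        SELECT id, dt, LEAD(dt) OVER (PARTITION BY id ORDER BY dt) AS next_dt
        FROM t42 WHERE id IN (4, 5)
    )
    SELECT t4.dt, MIN(t5.dt)
    FROM t t4
    JOIN t t5 ON t5.dt > t4.dt AND (t4.next_dt IS NULL OR t5.dt <= t4.next_dt)
    WHERE t4.id = 4 AND t5.id = 5
    GROUP BY t4.dt
    ORDER BY t4.dt
""").fetchall()
```

Note the extra ID-5 row ('2012-06-01') in the first window is discarded by the MIN, exactly as described above.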
To work around what appears to be a 9i bug (probably fixed in 9.2.0.3 or 9.2.0.6 according to MOS, but it depends exactly on which bug you're hitting):
select t4.id as id4, t4.dt as date4, t5.id as id5, min(t5.dt) as date5
from (
select id, dt, lead(dt) over (partition by id order by dt) as next_dt
from t42
where id = 4
) t4
join (select id, dt
from t42
where id = 5
) t5 on t5.dt > t4.dt and (t4.next_dt is null or t5.dt <= t4.next_dt)
group by t4.id, t4.dt, t5.id
order by t4.dt;
I don't have an old enough version to test this against unfortunately. You don't have to use the t5 subselect, you could just join your main table straight to t4, but I think this is a little clearer.
qid & accept id:
(14856663, 14857244)
query:
Datagrid textbox search C#
soup:
This will give you the gridview row index for the value:
\nString searchValue = "somestring";\nint rowIndex = -1;\nforeach(DataGridViewRow row in DataGridView1.Rows)\n{\n if(row.Cells[1].Value.ToString().Equals(searchValue))\n {\n rowIndex = row.Index;\n break;\n }\n}\n
\nOr a LINQ query
\n int rowIndex = -1;\n\n DataGridViewRow row = dgv.Rows\n .Cast<DataGridViewRow>()\n .Where(r => r.Cells["SystemId"].Value.ToString().Equals(searchValue))\n .First();\n\n rowIndex = row.Index;\n
\nthen you can do:
\n dataGridView1.Rows[rowIndex].Selected = true;\n
\n
soup wrap:
This will give you the gridview row index for the value:
String searchValue = "somestring";
int rowIndex = -1;
foreach(DataGridViewRow row in DataGridView1.Rows)
{
if(row.Cells[1].Value.ToString().Equals(searchValue))
{
rowIndex = row.Index;
break;
}
}
Or a LINQ query
int rowIndex = -1;
DataGridViewRow row = dgv.Rows
.Cast<DataGridViewRow>()
.Where(r => r.Cells["SystemId"].Value.ToString().Equals(searchValue))
.First();
rowIndex = row.Index;
then you can do:
dataGridView1.Rows[rowIndex].Selected = true;
qid & accept id:
(14860852, 14860906)
query:
Set column to automatically pull data from referenced table
soup:
You could create a view, a view is basically a SQL statement that is stored on the MySQL server and acts like a table
\nCREATE VIEW ViewName AS\nSELECT tbl1.data, tbl2.speeding\nFROM tbl1\nINNER JOIN tbl2 ON tbl2.key = tbl1.key;\n
\nhttp://dev.mysql.com/doc/refman/5.0/en/create-view.html
\nYou then use the view as you would use any table
\nSELECT data, speeding\nFROM ViewName\n
\n
soup wrap:
You could create a view; a view is basically a SQL statement that is stored on the MySQL server and acts like a table:
CREATE VIEW ViewName AS
SELECT tbl1.data, tbl2.speeding
FROM tbl1
INNER JOIN tbl2 ON tbl2.key = tbl1.key;
http://dev.mysql.com/doc/refman/5.0/en/create-view.html
You then use the view as you would use any table
SELECT data, speeding
FROM ViewName
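A quick runnable sketch of the same idea using Python's built-in SQLite (table and column names follow the answer; the data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tbl1 ("key" INTEGER, data TEXT);
    CREATE TABLE tbl2 ("key" INTEGER, speeding INTEGER);
    INSERT INTO tbl1 VALUES (1, 'a'), (2, 'b');
    INSERT INTO tbl2 VALUES (1, 90), (2, 55);
    -- The view stores the join; it is re-run on every query
    CREATE VIEW ViewName AS
        SELECT tbl1.data, tbl2.speeding
        FROM tbl1 JOIN tbl2 ON tbl2."key" = tbl1."key";
""")
# Query the view exactly as you would a table
rows = conn.execute("SELECT data, speeding FROM ViewName ORDER BY data").fetchall()
```

("key" is quoted here because it is a SQL keyword; a non-reserved column name would need no quoting.)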
qid & accept id:
(14866797, 14866820)
query:
cartesian product - SUM two columns in the same table
soup:
you can use CASE on this,
\nSELECT SUM(arc_baseEventCount) 'total event count', \n SUM(CASE WHEN arc_name = 'Connector Raw Event Statistics' THEN arc_baseEventCount ELSE NULL END) 'Connector Raw Event Statistics'\nFROM Events\n
\n\n- SQLFiddle Demo
\n
\nUPDATE 1
\nSELECT SUM(arc_baseEventCount) 'total event count', \n SUM(CASE WHEN arc_name = 'Connector Raw Event Statistics' THEN arc_baseEventCount ELSE NULL END) 'total_1',\n SUM(CASE WHEN name = 'Connector Raw Event Statistics' THEN arc_deviceCustomNumber3 ELSE NULL END) 'total_2'\nFROM Events\n
\n
soup wrap:
You can use CASE for this:
SELECT SUM(arc_baseEventCount) 'total event count',
SUM(CASE WHEN arc_name = 'Connector Raw Event Statistics' THEN arc_baseEventCount ELSE NULL END) 'Connector Raw Event Statistics'
FROM Events
UPDATE 1
SELECT SUM(arc_baseEventCount) 'total event count',
SUM(CASE WHEN arc_name = 'Connector Raw Event Statistics' THEN arc_baseEventCount ELSE NULL END) 'total_1',
SUM(CASE WHEN name = 'Connector Raw Event Statistics' THEN arc_deviceCustomNumber3 ELSE NULL END) 'total_2'
FROM Events
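The conditional-SUM pattern works in most databases; here is a minimal sketch with Python and SQLite (invented data; omitting the ELSE branch gives NULL, which SUM ignores):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Events (arc_name TEXT, arc_baseEventCount INTEGER);
    INSERT INTO Events VALUES
        ('Connector Raw Event Statistics', 10),
        ('Connector Raw Event Statistics', 5),
        ('Other', 7);
""")
# One pass: total over all rows, plus a total restricted by the CASE
total, subset = conn.execute("""
    SELECT SUM(arc_baseEventCount),
           SUM(CASE WHEN arc_name = 'Connector Raw Event Statistics'
                    THEN arc_baseEventCount END)
    FROM Events
""").fetchone()
```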
qid & accept id:
(14903899, 14904003)
query:
sub query with comma delimited output in one column
soup:
You can use the following:
\nselect t1.col1,\n t1.col2, \n t1.col3,\n left(t2.col4, len(t2.col4)-1) col4\nfrom table1 t1\ncross apply\n(\n select cast(t2.Col4 as varchar(10)) + ', '\n from Table2 t2\n where t1.col1 = t2.col1\n FOR XML PATH('')\n) t2 (col4)\n
\nSee SQL Fiddle with Demo.
\nOr you can use:
\nselect t1.col1,\n t1.col2, \n t1.col3,\n STUFF(\n (SELECT ', ' + cast(t2.Col4 as varchar(10))\n FROM Table2 t2\n where t1.col1 = t2.col1\n FOR XML PATH (''))\n , 1, 1, '') AS col4\nfrom table1 t1\n
\n\n
soup wrap:
You can use the following:
select t1.col1,
t1.col2,
t1.col3,
left(t2.col4, len(t2.col4)-1) col4
from table1 t1
cross apply
(
select cast(t2.Col4 as varchar(10)) + ', '
from Table2 t2
where t1.col1 = t2.col1
FOR XML PATH('')
) t2 (col4)
See SQL Fiddle with Demo.
Or you can use:
select t1.col1,
t1.col2,
t1.col3,
STUFF(
(SELECT ', ' + cast(t2.Col4 as varchar(10))
FROM Table2 t2
where t1.col1 = t2.col1
FOR XML PATH (''))
, 1, 1, '') AS col4
from table1 t1
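FOR XML PATH is SQL Server-specific; as a sketch of the same comma-delimited column in a portable setting, SQLite's group_concat in a correlated subquery does the equivalent job (data invented; the order of values inside the list is not guaranteed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (col1 INTEGER, col2 TEXT);
    CREATE TABLE table2 (col1 INTEGER, col4 INTEGER);
    INSERT INTO table1 VALUES (1, 'a'), (2, 'b');
    INSERT INTO table2 VALUES (1, 10), (1, 20), (2, 30);
""")
# One CSV of matching col4 values per table1 row
rows = conn.execute("""
    SELECT t1.col1,
           (SELECT group_concat(t2.col4, ', ')
            FROM table2 t2
            WHERE t2.col1 = t1.col1) AS col4
    FROM table1 t1
    ORDER BY t1.col1
""").fetchall()
```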
qid & accept id:
(14930630, 14930654)
query:
How to select attributes in a relational database where I have to check multiple attributes?
soup:
You are missing the FROM clause, and the string literals must be in '' instead of double quotes. If the age is of data type numeric, remove the quotes around it, if not use ''. Something like:
\nSelect person1.*\nFROM person1\nwhere person1.age = 42 \n and person1.job = 'bng' \n and person1.gender = 'f';\n
\n\nThis should give you the row:
\n| PERSON1 | AGE | JOB | GENDER |\n--------------------------------\n| p2 | 42 | bng | f |\n
\n
soup wrap:
You are missing the FROM clause, and string literals must be enclosed in single quotes ('') instead of double quotes. If age is a numeric data type, remove the quotes around its value; if not, keep the single quotes. Something like:
Select person1.*
FROM person1
where person1.age = 42
and person1.job = 'bng'
and person1.gender = 'f';
This should give you the row:
| PERSON1 | AGE | JOB | GENDER |
--------------------------------
| p2 | 42 | bng | f |
qid & accept id:
(14952911, 14953106)
query:
Postgres: How to create reference cell?
soup:
An RDBMS uses a different approach: there are queries and data. When you query something, it is natural to perform extra calculations on the data. In your case this is a simple arithmetic function.
\nSay, you have a table:\n
\nCREATE TABLE tab (\n id integer PRIMARY KEY,\n a1 integer\n);\n
\nNow, to achieve your case you can do the following:\n
\nSELECT id,\n a1,\n a1+1 AS a2\n FROM tab;\n
\nAs you can see, I'm using existing columns in the formula and assign the result a new alias a2.
\nI really recommend you to read the Tutorial and SQL Basics from the official PostgreSQL documentation, along with some SQL introduction book.
\n
soup wrap:
An RDBMS uses a different approach: there are queries and data. When you query something, it is natural to perform extra calculations on the data. In your case this is a simple arithmetic function.
Say, you have a table:
CREATE TABLE tab (
id integer PRIMARY KEY,
a1 integer
);
Now, to achieve your case you can do the following:
SELECT id,
a1,
a1+1 AS a2
FROM tab;
As you can see, I'm using existing columns in the formula and assigning the result a new alias, a2.
I really recommend you to read the Tutorial and SQL Basics from the official PostgreSQL documentation, along with some SQL introduction book.
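A minimal runnable sketch of the computed-column idea, using Python's built-in SQLite rather than PostgreSQL (the table matches the one defined above; the data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tab (id INTEGER PRIMARY KEY, a1 INTEGER);
    INSERT INTO tab VALUES (1, 10), (2, 20);
""")
# a2 is computed at query time from a1; nothing extra is stored
rows = conn.execute("SELECT id, a1, a1 + 1 AS a2 FROM tab ORDER BY id").fetchall()
```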
qid & accept id:
(14961787, 14962034)
query:
SQL Server 2005: Insert one to many (1 Order-Many Charges) results into @table
soup:
This should work:
\nSELECT O.OrderId, C.ChargeId\nFROM Orders O\n JOIN Charges C ON O.CustomerId = C.CustomerId AND\n (C.ProductId = O.ProductId OR C.ProductId = 0)\nORDER BY O.OrderId, C.ChargeId\n
\nHere is the sample Fiddle.
\nAnd it produces these results:
\nORDERID CHARGEID\n1 1\n1 2\n2 3\n2 4\n3 5\n4 1\n
\n
soup wrap:
This should work:
SELECT O.OrderId, C.ChargeId
FROM Orders O
JOIN Charges C ON O.CustomerId = C.CustomerId AND
(C.ProductId = O.ProductId OR C.ProductId = 0)
ORDER BY O.OrderId, C.ChargeId
Here is the sample Fiddle.
And it produces these results:
ORDERID CHARGEID
1 1
1 2
2 3
2 4
3 5
4 1
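A small sketch of this join in Python with SQLite, with invented data; ProductId = 0 plays the role of the catch-all charge that applies to any of the customer's products:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Orders (OrderId INTEGER, CustomerId INTEGER, ProductId INTEGER);
    CREATE TABLE Charges (ChargeId INTEGER, CustomerId INTEGER, ProductId INTEGER);
    INSERT INTO Orders VALUES (1, 1, 7), (2, 2, 8);
    -- ProductId = 0 marks a charge that applies to every product
    INSERT INTO Charges VALUES (1, 1, 7), (2, 1, 0), (3, 2, 9);
""")
# Each order picks up its product-specific charges plus any catch-all charge
rows = conn.execute("""
    SELECT O.OrderId, C.ChargeId
    FROM Orders O
    JOIN Charges C ON O.CustomerId = C.CustomerId
                  AND (C.ProductId = O.ProductId OR C.ProductId = 0)
    ORDER BY O.OrderId, C.ChargeId
""").fetchall()
```

Order 1 matches both its product charge and the catch-all; order 2 matches nothing, so it drops out of the inner join.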
qid & accept id:
(14964462, 14966382)
query:
JPA Query for toggling a boolean in a UPDATE
soup:
That can be done with the case expression:
\nUPDATE FOO a \nSET a.bar = \n CASE a.bar \n WHEN TRUE THEN FALSE\n ELSE TRUE END\nWHERE a.id in :ids\n
\nFor nullable Boolean bit more is needed:
\nUPDATE FOO a \nSET a.bar = \n CASE a.bar \n WHEN TRUE THEN FALSE\n WHEN FALSE THEN TRUE\n ELSE a.bar END\nWHERE a.id in :ids\n
\n
soup wrap:
That can be done with the case expression:
UPDATE FOO a
SET a.bar =
CASE a.bar
WHEN TRUE THEN FALSE
ELSE TRUE END
WHERE a.id in :ids
For a nullable Boolean, a bit more is needed:
UPDATE FOO a
SET a.bar =
CASE a.bar
WHEN TRUE THEN FALSE
WHEN FALSE THEN TRUE
ELSE a.bar END
WHERE a.id in :ids
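The same toggle can be sketched outside JPA; SQLite has no native BOOLEAN, so 1/0 stands in for TRUE/FALSE here (data invented, including a NULL to show the nullable branch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- 1/0 stands in for TRUE/FALSE; NULL stays NULL
    CREATE TABLE FOO (id INTEGER PRIMARY KEY, bar INTEGER);
    INSERT INTO FOO VALUES (1, 1), (2, 0), (3, NULL);
""")
# CASE flips 1<->0 and leaves NULL untouched, as in the nullable variant
conn.execute("""
    UPDATE FOO
    SET bar = CASE bar WHEN 1 THEN 0 WHEN 0 THEN 1 ELSE bar END
    WHERE id IN (1, 2, 3)
""")
rows = conn.execute("SELECT id, bar FROM FOO ORDER BY id").fetchall()
```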
qid & accept id:
(14965566, 14966674)
query:
How to copy in field if query returns it blank?
soup:
Making some assumptions about your data as in comments, particularly about how to match and pick a substitute name value; and with some dummy data that I think matches yours:
\ncreate table tablea(out_num number,\n equip_name varchar2(5),\n event_type varchar2(10),\n comments varchar2(10),\n timestamp date, feed_id number);\n\ncreate table tableb(id number, name varchar2(10));\n\nalter session set nls_date_format = 'MM/DD/YYYY HH24:MI';\n\ninsert into tablea values (12345, null, 'abcd', null, to_date('02/11/2013 11:12'), 1);\ninsert into tablea values (12345, null, 'abcd', null, to_date('02/11/2013 11:11'), 1);\ninsert into tablea values (12345, null, 'abcd', null, to_date('02/11/2013 11:06'), 1);\ninsert into tablea values (12345, null, 'abcd', null, to_date('02/11/2013 11:06'), 1);\ninsert into tablea values (12345, null, 'SUB', null, to_date('02/11/2013 11:11'), 2);\ninsert into tablea values (12345, null, 'SUB', null, to_date('02/11/2013 11:12'), 2);\ninsert into tablea values (12345, null, 'XYZ', null, to_date('02/11/2013 11:13'), 3);\ninsert into tablea values (12345, null, 'XYZ', null, to_date('02/11/2013 11:13'), 3);\ninsert into tablea values (12345, null, 'XYZ', null, to_date('02/11/2013 11:13'), 3);\ninsert into tablea values (12345, null, 'XYZ', null, to_date('02/11/2013 11:13'), 3);\ninsert into tablea values (12345, null, 'XYZ', null, to_date('02/11/2013 11:13'), 3);\ninsert into tablea values (12345, null, 'XYZ', null, to_date('02/11/2013 11:03'), 3);\ninsert into tablea values (12345, null, 'CAUSE', 'APPLE', to_date('02/11/2013 11:13'), 4);\ninsert into tablea values (12345, null, 'CAUSE', 'APPLE', to_date('02/11/2013 11:13'), 4);\ninsert into tablea values (12345, null, 'CAUSE', 'APPLE', to_date('02/11/2013 11:13'), 4);\ninsert into tablea values (12345, null, 'STATUS', 'BOOKS', to_date('02/11/2013 11:13'), 5);\ninsert into tablea values (12345, null, 'STATUS', 'BOOKS', to_date('02/11/2013 11:13'), 5);\ninsert into tablea values (12345, null, 'STATUS', 'BOOKS', to_date('02/11/2013 11:03'), 5);\n\ninsert into tableb values(3, 'LION');\n
\nThis gets your result:
\nselect * from (\n select a.out_num,\n a.timestamp,\n a.equip_name,\n a.event_type,\n a.comments,\n coalesce(b.name,\n first_value(b.name)\n over (partition by a.out_num\n order by b.name nulls last)) as name\n from tablea a\n left outer join tableb b on a.feed_id = b.id\n where a.out_num = '12345'\n and a.event_type in ('CAUSE', 'STATUS', 'XYZ')\n)\nwhere event_type in ('CAUSE', 'STATUS');\n\n OUT_NUM TIMESTAMP EQUIP_NAME EVENT_TYPE COMMENTS NAME \n---------- ------------------ ---------- ---------- ---------- ----------\n 12345 02/11/2013 11:03 STATUS BOOKS LION \n 12345 02/11/2013 11:13 STATUS BOOKS LION \n 12345 02/11/2013 11:13 STATUS BOOKS LION \n 12345 02/11/2013 11:13 CAUSE APPLE LION \n 12345 02/11/2013 11:13 CAUSE APPLE LION \n 12345 02/11/2013 11:13 CAUSE APPLE LION \n
\nThe inner query includes XYZ and uses the analytic first_value() function to pick a name if the directly matched value is null - the coalesce may not be necessary if there really will never be a direct match. (You might also need to adjust the partition by or order by clauses if the assumptions are wrong). The outer query just strips out the XYZ records since you don't want those.
\n
\nIf you want to get a name value from any matching record then just remove the filter in the inner query.
\nBut now you're perhaps more likely to have more than one non-null record; this will give you one that matches a.feed_id if it exists, or the 'first' one (alphabetically, ish) for that out_num if it doesn't. You could order by b.id instead, or any other column in tableb; ordering by anything in tablea would need a different solution. If you'll only have one possible match anyway then it doesn't really matter and you can leave out the order by, though it's better to have it anyway.
\nIf I add some more data for a different out_num:
\ninsert into tablea values (12346, null, 'abcd', null, to_date('02/11/2013 11:11'), 1);\ninsert into tablea values (12346, null, 'SUB', null, to_date('02/11/2013 11:12'), 2);\ninsert into tablea values (12346, null, 'XYZ', null, to_date('02/11/2013 11:13'), 6);\ninsert into tablea values (12346, null, 'CAUSE', 'APPLE', to_date('02/11/2013 11:14'), 4);\ninsert into tablea values (12346, null, 'STATUS', 'BOOKS', to_date('02/11/2013 11:15'), 5);\n\ninsert into tableb values(1, 'TIGER');\n
\n...then this - which just has the filter dropped, and I've left out the coalesce this time - gives the same answer for 12345, and this for 12346:
\nselect * from (\n select a.out_num,\n a.timestamp,\n a.equip_name,\n a.event_type,\n a.comments,\n first_value(b.name)\n over (partition by a.out_num\n order by b.name nulls last) as name\n from tablea a\n left outer join tableb b on a.feed_id = b.id\n)\nwhere out_num = '12346'\nand event_type in ('CAUSE', 'STATUS');\n\n OUT_NUM TIMESTAMP EQUIP_NAME EVENT_TYPE COMMENTS NAME \n---------- ------------------ ---------- ---------- ---------- ----------\n 12346 02/11/2013 11:14 CAUSE APPLE TIGER \n 12346 02/11/2013 11:15 STATUS BOOKS TIGER \n
\n... where TIGER is linked to abcd, not XYZ.
\n
soup wrap:
Making some assumptions about your data as in comments, particularly about how to match and pick a substitute name value; and with some dummy data that I think matches yours:
create table tablea(out_num number,
equip_name varchar2(5),
event_type varchar2(10),
comments varchar2(10),
timestamp date, feed_id number);
create table tableb(id number, name varchar2(10));
alter session set nls_date_format = 'MM/DD/YYYY HH24:MI';
insert into tablea values (12345, null, 'abcd', null, to_date('02/11/2013 11:12'), 1);
insert into tablea values (12345, null, 'abcd', null, to_date('02/11/2013 11:11'), 1);
insert into tablea values (12345, null, 'abcd', null, to_date('02/11/2013 11:06'), 1);
insert into tablea values (12345, null, 'abcd', null, to_date('02/11/2013 11:06'), 1);
insert into tablea values (12345, null, 'SUB', null, to_date('02/11/2013 11:11'), 2);
insert into tablea values (12345, null, 'SUB', null, to_date('02/11/2013 11:12'), 2);
insert into tablea values (12345, null, 'XYZ', null, to_date('02/11/2013 11:13'), 3);
insert into tablea values (12345, null, 'XYZ', null, to_date('02/11/2013 11:13'), 3);
insert into tablea values (12345, null, 'XYZ', null, to_date('02/11/2013 11:13'), 3);
insert into tablea values (12345, null, 'XYZ', null, to_date('02/11/2013 11:13'), 3);
insert into tablea values (12345, null, 'XYZ', null, to_date('02/11/2013 11:13'), 3);
insert into tablea values (12345, null, 'XYZ', null, to_date('02/11/2013 11:03'), 3);
insert into tablea values (12345, null, 'CAUSE', 'APPLE', to_date('02/11/2013 11:13'), 4);
insert into tablea values (12345, null, 'CAUSE', 'APPLE', to_date('02/11/2013 11:13'), 4);
insert into tablea values (12345, null, 'CAUSE', 'APPLE', to_date('02/11/2013 11:13'), 4);
insert into tablea values (12345, null, 'STATUS', 'BOOKS', to_date('02/11/2013 11:13'), 5);
insert into tablea values (12345, null, 'STATUS', 'BOOKS', to_date('02/11/2013 11:13'), 5);
insert into tablea values (12345, null, 'STATUS', 'BOOKS', to_date('02/11/2013 11:03'), 5);
insert into tableb values(3, 'LION');
This gets your result:
select * from (
select a.out_num,
a.timestamp,
a.equip_name,
a.event_type,
a.comments,
coalesce(b.name,
first_value(b.name)
over (partition by a.out_num
order by b.name nulls last)) as name
from tablea a
left outer join tableb b on a.feed_id = b.id
where a.out_num = '12345'
and a.event_type in ('CAUSE', 'STATUS', 'XYZ')
)
where event_type in ('CAUSE', 'STATUS');
OUT_NUM TIMESTAMP EQUIP_NAME EVENT_TYPE COMMENTS NAME
---------- ------------------ ---------- ---------- ---------- ----------
12345 02/11/2013 11:03 STATUS BOOKS LION
12345 02/11/2013 11:13 STATUS BOOKS LION
12345 02/11/2013 11:13 STATUS BOOKS LION
12345 02/11/2013 11:13 CAUSE APPLE LION
12345 02/11/2013 11:13 CAUSE APPLE LION
12345 02/11/2013 11:13 CAUSE APPLE LION
The inner query includes XYZ and uses the analytic first_value() function to pick a name if the directly matched value is null - the coalesce may not be necessary if there really will never be a direct match. (You might also need to adjust the partition by or order by clauses if the assumptions are wrong). The outer query just strips out the XYZ records since you don't want those.
If you want to get a name value from any matching record then just remove the filter in the inner query.
But now you're perhaps more likely to have more than one non-null record; this will give you one that matches a.feed_id if it exists, or the 'first' one (alphabetically, ish) for that out_num if it doesn't. You could order by b.id instead, or any other column in tableb; ordering by anything in tablea would need a different solution. If you'll only have one possible match anyway then it doesn't really matter and you can leave out the order by, though it's better to have it anyway.
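A stripped-down sketch of the coalesce + first_value fallback in Python with SQLite (window functions need SQLite 3.25+; `ORDER BY (b.name IS NULL), b.name` substitutes for Oracle's NULLS LAST, and only the two join columns are kept):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tablea (out_num INTEGER, feed_id INTEGER);
    CREATE TABLE tableb (id INTEGER, name TEXT);
    INSERT INTO tablea VALUES (12345, 3), (12345, 4), (12345, 5);
    INSERT INTO tableb VALUES (3, 'LION');
""")
# Direct match wins; otherwise borrow the first non-null name in the group
rows = conn.execute("""
    SELECT a.feed_id,
           COALESCE(b.name,
                    FIRST_VALUE(b.name) OVER (PARTITION BY a.out_num
                                              ORDER BY (b.name IS NULL), b.name)) AS name
    FROM tablea a
    LEFT JOIN tableb b ON a.feed_id = b.id
    ORDER BY a.feed_id
""").fetchall()
```

Only feed_id 3 matches directly; the other two rows inherit 'LION' through the window function, mirroring the LION result above.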
If I add some more data for a different out_num:
insert into tablea values (12346, null, 'abcd', null, to_date('02/11/2013 11:11'), 1);
insert into tablea values (12346, null, 'SUB', null, to_date('02/11/2013 11:12'), 2);
insert into tablea values (12346, null, 'XYZ', null, to_date('02/11/2013 11:13'), 6);
insert into tablea values (12346, null, 'CAUSE', 'APPLE', to_date('02/11/2013 11:14'), 4);
insert into tablea values (12346, null, 'STATUS', 'BOOKS', to_date('02/11/2013 11:15'), 5);
insert into tableb values(1, 'TIGER');
...then this - which just has the filter dropped, and I've left out the coalesce this time - gives the same answer for 12345, and this for 12346:
select * from (
select a.out_num,
a.timestamp,
a.equip_name,
a.event_type,
a.comments,
first_value(b.name)
over (partition by a.out_num
order by b.name nulls last) as name
from tablea a
left outer join tableb b on a.feed_id = b.id
)
where out_num = '12346'
and event_type in ('CAUSE', 'STATUS');
OUT_NUM TIMESTAMP EQUIP_NAME EVENT_TYPE COMMENTS NAME
---------- ------------------ ---------- ---------- ---------- ----------
12346 02/11/2013 11:14 CAUSE APPLE TIGER
12346 02/11/2013 11:15 STATUS BOOKS TIGER
... where TIGER is linked to abcd, not XYZ.
qid & accept id:
(15002034, 15002117)
query:
SQL to group more than one records of a joined table?
soup:
In MySQL you will want to use the GROUP_CONCAT() function which will concatenate the multiple rows into a single row. Since this is an aggregate function, you will also use a GROUP BY clause on the query:
\nselect p.id,\n p.name,\n group_concat(c.id order by c.id) ChildrenIds,\n group_concat(c.name order by c.id) ChildrenNames\nfrom parent p\nleft join children c\n on p.id = c.parent_id\ngroup by p.id, p.name\n
\nSee SQL Fiddle with Demo.
\nThe result is:
\n| ID | NAME | CHILDRENIDS | CHILDRENNAMES |\n------------------------------------------------------------------\n| 1 | Parent 1 | 1,2 | Child P1 1,Child P1 2 |\n| 2 | Parent 2 | 3,4,5 | Child P2 1,Child P2 2,Child P2 3 |\n
\n
soup wrap:
In MySQL you will want to use the GROUP_CONCAT() function which will concatenate the multiple rows into a single row. Since this is an aggregate function, you will also use a GROUP BY clause on the query:
select p.id,
p.name,
group_concat(c.id order by c.id) ChildrenIds,
group_concat(c.name order by c.id) ChildrenNames
from parent p
left join children c
on p.id = c.parent_id
group by p.id, p.name
See SQL Fiddle with Demo.
The result is:
| ID | NAME | CHILDRENIDS | CHILDRENNAMES |
------------------------------------------------------------------
| 1 | Parent 1 | 1,2 | Child P1 1,Child P1 2 |
| 2 | Parent 2 | 3,4,5 | Child P2 1,Child P2 2,Child P2 3 |
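The same shape can be sketched with SQLite's group_concat (which, unlike MySQL's, does not accept an ORDER BY inside the call in older versions, so the order within each list is not guaranteed); data is trimmed from the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE parent (id INTEGER, name TEXT);
    CREATE TABLE children (id INTEGER, parent_id INTEGER, name TEXT);
    INSERT INTO parent VALUES (1, 'Parent 1'), (2, 'Parent 2');
    INSERT INTO children VALUES
        (1, 1, 'Child P1 1'), (2, 1, 'Child P1 2'), (3, 2, 'Child P2 1');
""")
# One row per parent; children collapse into comma-separated lists
rows = conn.execute("""
    SELECT p.id, p.name, group_concat(c.id), group_concat(c.name)
    FROM parent p
    LEFT JOIN children c ON p.id = c.parent_id
    GROUP BY p.id, p.name
    ORDER BY p.id
""").fetchall()
```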
qid & accept id:
(15034144, 15034170)
query:
Is it possible to join two tables of multiple rows by only the first ID in each table?
soup:
You can do this by using row_number() to create a fake join column:
\nselect coalesce(a.id, b.id) as id, a.colors, b.states\nfrom (select a.*, row_number() over (order by id) as seqnum\n from a\n ) a full outer join\n (select b.*, row_number() over (order by id) as seqnum\n from b\n ) b\n on b.seqnum = a.seqnum\n
\nActually, in Oracle, you can also just use rownum:
\nselect coalesce(a.id, b.id) as id, a.colors, b.states\nfrom (select a.*, rownum as seqnum\n from a\n ) a full outer join\n (select b.*, rownum as seqnum\n from b\n ) b\n on b.seqnum = a.seqnum\n
\n
soup wrap:
You can do this by using row_number() to create a fake join column:
select coalesce(a.id, b.id) as id, a.colors, b.states
from (select a.*, row_number() over (order by id) as seqnum
from a
) a full outer join
(select b.*, row_number() over (order by id) as seqnum
from b
) b
on b.seqnum = a.seqnum
Actually, in Oracle, you can also just use rownum:
select coalesce(a.id, b.id) as id, a.colors, b.states
from (select a.*, rownum as seqnum
from a
) a full outer join
(select b.*, rownum as seqnum
from b
) b
on b.seqnum = a.seqnum
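A sketch of the row_number() pairing in Python with SQLite; SQLite only gained FULL OUTER JOIN in 3.39, so this uses a plain join and assumes (hypothetically) that both tables have the same row count:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (id INTEGER, colors TEXT);
    CREATE TABLE b (id INTEGER, states TEXT);
    INSERT INTO a VALUES (1, 'red'), (2, 'blue');
    INSERT INTO b VALUES (10, 'OH'), (20, 'TX');
""")
# Number each table's rows independently, then join on that fake key
rows = conn.execute("""
    SELECT a.colors, b.states
    FROM (SELECT colors, ROW_NUMBER() OVER (ORDER BY id) AS seqnum FROM a) a
    JOIN (SELECT states, ROW_NUMBER() OVER (ORDER BY id) AS seqnum FROM b) b
      ON b.seqnum = a.seqnum
    ORDER BY a.seqnum
""").fetchall()
```

With unequal row counts you would need the FULL OUTER JOIN from the answer so the longer table's leftover rows still appear.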
qid & accept id:
(15066914, 15129850)
query:
SQL return Information by not existing rows
soup:
Hope to explain it now more clearly:
\nThe original code what I have now is:
\nselect distinct username, name, surname\nfrom users u, accounts a\nwhere u.user_nr = a.user_nr\nand username in (\n'existing_user',\n'not_existing_user'\n) order by username;\n
\nand it gives me:
\nUSERNAME NAME SURNAME \n------------------------- --------------- ---------------\nexisting_user Hello All\n\n1 row selected.\n
\nand I need:
\nUSERNAME NAME SURNAME \n------------------------- --------------- ---------------\nexisting_user Hello All\nnot_existing_user Not Exists Not Exists\n\n2 row selected.\n
\nThe Problem: the user not_existing_user is not existing in the DataBase, \nbut the query has to show him anyway from the code\nwith the Info - User not in the DB. \nFor 500 Users I can not check everyone separate :/
\n
soup wrap:
I hope to explain it more clearly now:
The original code I have now is:
select distinct username, name, surname
from users u, accounts a
where u.user_nr = a.user_nr
and username in (
'existing_user',
'not_existing_user'
) order by username;
and it gives me:
USERNAME NAME SURNAME
------------------------- --------------- ---------------
existing_user Hello All
1 row selected.
and I need:
USERNAME NAME SURNAME
------------------------- --------------- ---------------
existing_user Hello All
not_existing_user Not Exists Not Exists
2 row selected.
The problem: the user not_existing_user does not exist in the database,
but the query has to show him anyway,
with the info 'User not in the DB'.
With 500 users I cannot check each one separately :/
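One common way to get this behaviour (sketched here with Python and SQLite, with invented data, and not necessarily the accepted answer's exact approach) is to drive the query from the list of usernames and LEFT JOIN to the users table, substituting 'Not Exists' where no row matches:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (username TEXT, name TEXT, surname TEXT);
    INSERT INTO users VALUES ('existing_user', 'Hello', 'All');
""")
# The wanted-usernames list drives the query, so missing users still show
rows = conn.execute("""
    SELECT w.username,
           COALESCE(u.name, 'Not Exists'),
           COALESCE(u.surname, 'Not Exists')
    FROM (SELECT 'existing_user' AS username
          UNION ALL SELECT 'not_existing_user') w
    LEFT JOIN users u ON u.username = w.username
    ORDER BY w.username
""").fetchall()
```

For 500 users the derived table would be generated (or loaded into a temp table) rather than written out by hand.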
qid & accept id:
(15100101, 15100456)
query:
UNPIVOT on an indeterminate number of columns
soup:
It sounds like you want to unpivot the table (pivoting would involve going from many rows and 2 columns to 1 row with many columns). You would most likely need to use dynamic SQL to generate the query and then use the DBMS_SQL package (or potentially EXECUTE IMMEDIATE) to execute it. You should also be able to construct a pipelined table function that did the unpivoting. You'd need to use dynamic SQL within the pipelined table function as well but it would potentially be less code. I'd expect a pure dynamic SQL statement using UNPIVOT to be more efficient, though.
\nAn inefficient approach, but one that is relatively easy to follow, would be something like
\nSQL> ed\nWrote file afiedt.buf\n\n 1 create or replace type emp_unpivot_type\n 2 as object (\n 3 empno number,\n 4 col varchar2(4000)\n 5* );\nSQL> /\n\nType created.\n\nSQL> create or replace type emp_unpivot_tbl\n 2 as table of emp_unpivot_type;\n 3 /\n\nType created.\n\nSQL> ed\nWrote file afiedt.buf\n\n 1 create or replace function unpivot_emp\n 2 ( p_empno in number )\n 3 return emp_unpivot_tbl\n 4 pipelined\n 5 is\n 6 l_val varchar2(4000);\n 7 begin\n 8 for cols in (select column_name from user_tab_columns where table_name = 'EMP')\n 9 loop\n 10 execute immediate 'select ' || cols.column_name || ' from emp where empno = :empno'\n 11 into l_val\n 12 using p_empno;\n 13 pipe row( emp_unpivot_type( p_empno, l_val ));\n 14 end loop;\n 15 return;\n 16* end;\nSQL> /\n\nFunction created.\n
\nYou can then call that in a SQL statement (I would think that you'd want at least a third column with the column name)
\nSQL> ed\nWrote file afiedt.buf\n\n 1 select *\n 2* from table( unpivot_emp( 7934 ))\nSQL> /\n\n EMPNO COL\n---------- ----------------------------------------\n 7934 7934\n 7934 MILLER\n 7934 CLERK\n 7934 7782\n 7934 23-JAN-82\n 7934 1301\n 7934\n 7934 10\n\n8 rows selected.\n
\nA more efficient approach would be to adapt Tom Kyte's show_table pipelined table function.
\n
soup wrap:
It sounds like you want to unpivot the table (pivoting would involve going from many rows and 2 columns to 1 row with many columns). You would most likely need to use dynamic SQL to generate the query and then use the DBMS_SQL package (or potentially EXECUTE IMMEDIATE) to execute it. You should also be able to construct a pipelined table function that did the unpivoting. You'd need to use dynamic SQL within the pipelined table function as well but it would potentially be less code. I'd expect a pure dynamic SQL statement using UNPIVOT to be more efficient, though.
An inefficient approach, but one that is relatively easy to follow, would be something like
SQL> ed
Wrote file afiedt.buf
1 create or replace type emp_unpivot_type
2 as object (
3 empno number,
4 col varchar2(4000)
5* );
SQL> /
Type created.
SQL> create or replace type emp_unpivot_tbl
2 as table of emp_unpivot_type;
3 /
Type created.
SQL> ed
Wrote file afiedt.buf
1 create or replace function unpivot_emp
2 ( p_empno in number )
3 return emp_unpivot_tbl
4 pipelined
5 is
6 l_val varchar2(4000);
7 begin
8 for cols in (select column_name from user_tab_columns where table_name = 'EMP')
9 loop
10 execute immediate 'select ' || cols.column_name || ' from emp where empno = :empno'
11 into l_val
12 using p_empno;
13 pipe row( emp_unpivot_type( p_empno, l_val ));
14 end loop;
15 return;
16* end;
SQL> /
Function created.
You can then call that in a SQL statement (I would think that you'd want at least a third column with the column name)
SQL> ed
Wrote file afiedt.buf
1 select *
2* from table( unpivot_emp( 7934 ))
SQL> /
EMPNO COL
---------- ----------------------------------------
7934 7934
7934 MILLER
7934 CLERK
7934 7782
7934 23-JAN-82
7934 1301
7934
7934 10
8 rows selected.
A more efficient approach would be to adapt Tom Kyte's show_table pipelined table function.
qid & accept id:
(15108987, 15115503)
query:
Mysql get a number of before and afer rows
soup:
This is really easy with union. Try this:
\n(select t.* from t where t.col <= YOURNAME\n order by t.col desc\n limit 6\n)\nunion all\n(select t.* from t where t.col > YOURNAME\n order by t.col\n limit 5\n)\norder by t.col\n
\nThe first part of the query returns the five before. The second returns the five after.
\nBy the way, if you have duplicates, you might want this instead:
\n(select t.* from t where t.col = YOURNAME)\nunion all\n(select t.* from t where t.col < YOURNAME\n order by t.col desc\n limit 5\n)\nunion all\n(select t.* from t where t.col > YOURNAME\n order by t.col\n limit 5\n)\norder by t.col\n
\n
soup wrap:
This is really easy with union. Try this:
(select t.* from t where t.col <= YOURNAME
order by t.col desc
limit 6
)
union all
(select t.* from t where t.col > YOURNAME
order by t.col
limit 5
)
order by t.col
The first part of the query returns the matching row plus the five rows before it (hence the <= with limit 6); the second returns the five rows after.
By the way, if you have duplicates, you might want this instead:
(select t.* from t where t.col = YOURNAME)
union all
(select t.* from t where t.col < YOURNAME
order by t.col desc
limit 5
)
union all
(select t.* from t where t.col > YOURNAME
order by t.col
limit 5
)
order by t.col
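To see the union trick in action, here is a runnable SQLite/Python sketch; the table name t, column col, and the sample names are all invented. SQLite only honors ORDER BY/LIMIT on a whole compound query, so each branch is wrapped in a subselect instead of bare parentheses.

```python
import sqlite3

# "Rows before and after a value" via two limited branches glued with
# UNION ALL, then sorted as one result.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [(name,) for name in "alice bob carol dave erin frank grace".split()])

rows = conn.execute("""
    SELECT col FROM (SELECT col FROM t WHERE col <= 'dave'
                     ORDER BY col DESC LIMIT 3)
    UNION ALL
    SELECT col FROM (SELECT col FROM t WHERE col > 'dave'
                     ORDER BY col LIMIT 2)
    ORDER BY col
""").fetchall()
neighbors = [r[0] for r in rows]
print(neighbors)  # the target row plus two before and two after
```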
qid & accept id:
(15117826, 15118331)
query:
selecting multiple counts when tables not directly co-relate
soup:
You need to do your counts in subqueries, or count distinct, as your multiple 1 to many relationships are causing cross joining. I don't know your data but imagine this scenario:
\nUsers:
\nUser_ID | Source_ID\n--------+--------------\n 1 | 1 \n
\nWhite_Rules
\nVictim_ID | Rule_ID\n----------+-------------\n 1 | 1\n 1 | 2\n
\nBlack_Rules
\nVictim_ID | Rule_ID\n----------+-------------\n 1 | 3\n 1 | 4\n
\nIf you run
\nSELECT Users.User_ID, \n Users.Source_ID, \n White_Rules.Rule_ID AS WhiteRuleID, \n Black_Rules.Rule_ID AS BlackRuleID\nFROM Users\n LEFT JOIN White_Rules\n ON White_Rules.Victim_ID = Users.User_ID\n LEFT JOIN Black_Rules\n ON Black_Rules.Victim_ID = Users.User_ID\n
\nYou will get all combinations of White_Rules.Rule_ID and Black_Rules.Rule_ID:
\nUser_ID | Source_ID | WhiteRuleID | BlackRuleID\n--------+-----------+-------------+-------------\n 1 | 1 | 1 | 3\n 1 | 1 | 2 | 4\n 1 | 1 | 1 | 3\n 1 | 1 | 2 | 4\n
\nSo counting the results will return 4 white rules and 4 black rules, even though there are only 2 of each.
\nYou should get the required results if you change your query to this:
\nSELECT Users.Source_ID,\n SUM(COALESCE(w.TotalWhite, 0)) AS TotalWhite,\n SUM(COALESCE(b.TotalBlack, 0)) AS TotalBlack,\n SUM(COALESCE(g.TotalGeneral, 0)) AS TotalGeneral\nFROM Users\n LEFT JOIN\n ( SELECT Victim_ID, COUNT(*) AS TotalWhite\n FROM White_Rules\n GROUP BY Victim_ID\n ) w\n ON w.Victim_ID = Users.User_ID\n LEFT JOIN\n ( SELECT Victim_ID, COUNT(*) AS TotalBlack\n FROM Black_Rules\n GROUP BY Victim_ID\n ) b\n ON b.Victim_ID = Users.User_ID\n LEFT JOIN\n ( SELECT Victim_ID, COUNT(*) AS TotalGeneral\n FROM General_Rules\n GROUP BY Victim_ID\n ) g\n ON g.Victim_ID = Users.User_ID\nWHERE Deleted = 'f'\nAND Source IS NOT NULL\nGROUP BY Users.Source_ID\n
\n\nAn alternative would be:
\nSELECT Users.Source_ID,\n COUNT(Rules.TotalWhite) AS TotalWhite,\n COUNT(Rules.TotalBlack) AS TotalBlack,\n COUNT(Rules.TotalGeneral) AS TotalGeneral\nFROM Users\n LEFT JOIN\n ( SELECT Victim_ID, 1 AS TotalWhite, NULL AS TotalBlack, NULL AS TotalGeneral\n FROM White_Rules\n UNION ALL\n SELECT Victim_ID, NULL AS TotalWhite, 1 AS TotalBlack, NULL AS TotalGeneral\n FROM Black_Rules\n UNION ALL\n SELECT Victim_ID, NULL AS TotalWhite, NULL AS TotalBlack, 1 AS TotalGeneral\n FROM General_Rules\n ) Rules\n ON Rules.Victim_ID = Users.User_ID\nWHERE Deleted = 'f'\nAND Source IS NOT NULL\nGROUP BY Users.Source_ID\n
\n\n
soup wrap:
You need to do your counts in subqueries, or count distinct, as your multiple one-to-many relationships are causing a cross join between the child tables. I don't know your data, but imagine this scenario:
Users:
User_ID | Source_ID
--------+--------------
1 | 1
White_Rules
Victim_ID | Rule_ID
----------+-------------
1 | 1
1 | 2
Black_Rules
Victim_ID | Rule_ID
----------+-------------
1 | 3
1 | 4
If you run
SELECT Users.User_ID,
Users.Source_ID,
White_Rules.Rule_ID AS WhiteRuleID,
Black_Rules.Rule_ID AS BlackRuleID
FROM Users
LEFT JOIN White_Rules
ON White_Rules.Victim_ID = Users.User_ID
LEFT JOIN Black_Rules
ON Black_Rules.Victim_ID = Users.User_ID
You will get all combinations of White_Rules.Rule_ID and Black_Rules.Rule_ID:
User_ID | Source_ID | WhiteRuleID | BlackRuleID
--------+-----------+-------------+-------------
1 | 1 | 1 | 3
1 | 1 | 2 | 4
1 | 1 | 1 | 3
1 | 1 | 2 | 4
So counting the results will return 4 white rules and 4 black rules, even though there are only 2 of each.
You should get the required results if you change your query to this:
SELECT Users.Source_ID,
SUM(COALESCE(w.TotalWhite, 0)) AS TotalWhite,
SUM(COALESCE(b.TotalBlack, 0)) AS TotalBlack,
SUM(COALESCE(g.TotalGeneral, 0)) AS TotalGeneral
FROM Users
LEFT JOIN
( SELECT Victim_ID, COUNT(*) AS TotalWhite
FROM White_Rules
GROUP BY Victim_ID
) w
ON w.Victim_ID = Users.User_ID
LEFT JOIN
( SELECT Victim_ID, COUNT(*) AS TotalBlack
FROM Black_Rules
GROUP BY Victim_ID
) b
ON b.Victim_ID = Users.User_ID
LEFT JOIN
( SELECT Victim_ID, COUNT(*) AS TotalGeneral
FROM General_Rules
GROUP BY Victim_ID
) g
ON g.Victim_ID = Users.User_ID
WHERE Deleted = 'f'
AND Source IS NOT NULL
GROUP BY Users.Source_ID
An alternative would be:
SELECT Users.Source_ID,
COUNT(Rules.TotalWhite) AS TotalWhite,
COUNT(Rules.TotalBlack) AS TotalBlack,
COUNT(Rules.TotalGeneral) AS TotalGeneral
FROM Users
LEFT JOIN
( SELECT Victim_ID, 1 AS TotalWhite, NULL AS TotalBlack, NULL AS TotalGeneral
FROM White_Rules
UNION ALL
SELECT Victim_ID, NULL AS TotalWhite, 1 AS TotalBlack, NULL AS TotalGeneral
FROM Black_Rules
UNION ALL
SELECT Victim_ID, NULL AS TotalWhite, NULL AS TotalBlack, 1 AS TotalGeneral
FROM General_Rules
) Rules
ON Rules.Victim_ID = Users.User_ID
WHERE Deleted = 'f'
AND Source IS NOT NULL
GROUP BY Users.Source_ID
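The fan-out described above is easy to reproduce. This SQLite/Python sketch rebuilds the answer's sample data and shows the naive join counting 4 of each rule type while the pre-aggregated subqueries count 2.

```python
import sqlite3

# Demonstrate count inflation from joining two independent 1-to-many
# tables, and the fix of pre-aggregating each child table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (user_id INT, source_id INT);
    CREATE TABLE white_rules (victim_id INT, rule_id INT);
    CREATE TABLE black_rules (victim_id INT, rule_id INT);
    INSERT INTO users VALUES (1, 1);
    INSERT INTO white_rules VALUES (1, 1), (1, 2);
    INSERT INTO black_rules VALUES (1, 3), (1, 4);
""")

# Naive join: every white row pairs with every black row (2 x 2 = 4 rows).
naive = conn.execute("""
    SELECT COUNT(w.rule_id), COUNT(b.rule_id)
    FROM users u
    LEFT JOIN white_rules w ON w.victim_id = u.user_id
    LEFT JOIN black_rules b ON b.victim_id = u.user_id
""").fetchone()

# Pre-aggregating each child table gives the true per-user counts.
fixed = conn.execute("""
    SELECT COALESCE(w.total, 0), COALESCE(b.total, 0)
    FROM users u
    LEFT JOIN (SELECT victim_id, COUNT(*) AS total
               FROM white_rules GROUP BY victim_id) w ON w.victim_id = u.user_id
    LEFT JOIN (SELECT victim_id, COUNT(*) AS total
               FROM black_rules GROUP BY victim_id) b ON b.victim_id = u.user_id
""").fetchone()
print(naive, fixed)  # (4, 4) (2, 2)
```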
qid & accept id:
(15122065, 15122150)
query:
SQL query for displaying specific data
soup:
For that, you need to look at all the numbers. The best way is using group by and having:
\nselect personid\nfrom person\ngroup by personid\nhaving sum(case when code not in ('1', '2', '3', '4', '5') then 1 else 0 end) = 0\n
\nThe having clause counts the number of records that are not those codes. If the count is 0, then the record is returned.
\nIf you want to be sure that all 5 codes are selected, then use this condition:
\nhaving sum(case when code not in ('1', '2', '3', '4', '5') then 1 else 0 end) = 0 and\n count(distinct code) = 5\n
\n
soup wrap:
For that, you need to look at all the numbers. The best way is using group by and having:
select personid
from person
group by personid
having sum(case when code not in ('1', '2', '3', '4', '5') then 1 else 0 end) = 0
The having clause counts the number of records that are not those codes. If the count is 0, then the record is returned.
If you want to be sure that all 5 codes are selected, then use this condition:
having sum(case when code not in ('1', '2', '3', '4', '5') then 1 else 0 end) = 0 and
count(distinct code) = 5
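As a quick check of the HAVING technique, here is a runnable SQLite/Python sketch; the person rows are invented (person 1 holds exactly codes 1-5, person 2 also has a code 9).

```python
import sqlite3

# Relational division via HAVING: keep only the persons whose codes are
# exactly the set {1,2,3,4,5}.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (personid INT, code TEXT)")
conn.executemany("INSERT INTO person VALUES (?, ?)",
                 [(1, c) for c in "12345"] + [(2, "1"), (2, "9")])

rows = conn.execute("""
    SELECT personid
    FROM person
    GROUP BY personid
    HAVING SUM(CASE WHEN code NOT IN ('1','2','3','4','5')
               THEN 1 ELSE 0 END) = 0
       AND COUNT(DISTINCT code) = 5
""").fetchall()
print(rows)  # only person 1 has all five codes and nothing else
```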
qid & accept id:
(15150057, 15150095)
query:
Adding a Date column based on the next row date value
soup:
The easiest way to do this is with a correlated subquery:
\nselect t.*,\n (select top 1 dateadd(day, -1, startDate )\n from tbl_temp t2\n where t2.aid = t.aid and\n t2.uid = t.uid and\n t2.startdate > t.startdate\n ) as endDate\nfrom tbl_temp t\n
\nTo get the current date, use isnull():
\nselect t.*,\n isnull((select top 1 dateadd(day, -1, startDate )\n from tbl_temp t2\n where t2.aid = t.aid and\n t2.uid = t.uid and\n t2.startdate > t.startdate\n ), getdate()\n ) as endDate\nfrom tbl_temp t\n
\nNormally, I would recommend coalesce() over isnull(). However, there is a bug in some versions of SQL Server where it evaluates the first argument twice. Normally, this doesn't make a difference, but with a subquery it does.
\nAnd finally, the use of sysdate makes me think of Oracle. The same approach will work there too.
\n
soup wrap:
The easiest way to do this is with a correlated subquery:
select t.*,
       (select top 1 dateadd(day, -1, startDate)
        from tbl_temp t2
        where t2.aid = t.aid and
              t2.uid = t.uid and
              t2.startdate > t.startdate
        order by t2.startdate
       ) as endDate
from tbl_temp t
To get the current date, use isnull():
select t.*,
       isnull((select top 1 dateadd(day, -1, startDate)
               from tbl_temp t2
               where t2.aid = t.aid and
                     t2.uid = t.uid and
                     t2.startdate > t.startdate
               order by t2.startdate
              ), getdate()
       ) as endDate
from tbl_temp t
Normally, I would recommend coalesce() over isnull(). However, there is a bug in some versions of SQL Server where it evaluates the first argument twice. Normally, this doesn't make a difference, but with a subquery it does.
And finally, the use of sysdate makes me think of Oracle. The same approach will work there too.
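Here is the same idea in runnable form, sketched against SQLite (which has no TOP, so MIN of the later start dates plays that role, and IFNULL stands in for isnull). The table layout follows the answer; the two sample rows are invented.

```python
import sqlite3

# Each row's end date is the smallest later start date minus one day;
# the newest row falls back to "today".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl_temp (aid INT, uid INT, startdate TEXT)")
conn.executemany("INSERT INTO tbl_temp VALUES (?,?,?)",
                 [(1, 1, "2013-01-01"), (1, 1, "2013-02-15")])

rows = conn.execute("""
    SELECT startdate,
           IFNULL((SELECT date(MIN(t2.startdate), '-1 day')
                   FROM tbl_temp t2
                   WHERE t2.aid = t.aid AND t2.uid = t.uid
                     AND t2.startdate > t.startdate),
                  date('now')) AS enddate
    FROM tbl_temp t
    ORDER BY startdate
""").fetchall()
print(rows)  # first row ends the day before the second begins
```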
qid & accept id:
(15187839, 15187881)
query:
MYSQL How do I Select all emails from a table but limit number of emails with the same domain
soup:
SELECT\n MIN(email) AS address1\n IF(MAX(email)==MIN(email),NULL,MAX(email)) AS address2\nFROM emaillist\nGROUP BY substring_index(email, '@', -1);\n
\nand if you want them in one column
\nSELECT MIN(email) AS address1\nFROM emaillist\nGROUP BY substring_index(email, '@', -1)\nUNION\nSELECT MAX(email) AS address1\nFROM emaillist\nGROUP BY substring_index(email, '@', -1)\n
\n
soup wrap:
SELECT
    MIN(email) AS address1,
    IF(MAX(email) = MIN(email), NULL, MAX(email)) AS address2
FROM emaillist
GROUP BY substring_index(email, '@', -1);
and if you want them in one column
SELECT MIN(email) AS address1
FROM emaillist
GROUP BY substring_index(email, '@', -1)
UNION
SELECT MAX(email) AS address1
FROM emaillist
GROUP BY substring_index(email, '@', -1)
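The grouping idea ports to other engines too. This SQLite/Python sketch (sample addresses invented) cuts the domain out with substr()/instr() since SQLite has no SUBSTRING_INDEX, then keeps at most the MIN and MAX address per domain, as above.

```python
import sqlite3

# At most two addresses per domain: group on the part after '@' and keep
# the alphabetically first and last address in each group.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emaillist (email TEXT)")
conn.executemany("INSERT INTO emaillist VALUES (?)",
                 [("a@x.com",), ("b@x.com",), ("c@x.com",), ("d@y.org",)])

rows = conn.execute("""
    SELECT MIN(email), MAX(email)
    FROM emaillist
    GROUP BY substr(email, instr(email, '@') + 1)
    ORDER BY 1
""").fetchall()
print(rows)  # b@x.com is dropped; y.org has only one address
```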
qid & accept id:
(15203058, 15203349)
query:
Group the rows that are having the same value in specific field in MySQL
soup:
I'm not particullarly proud of this solution because it is not very clear, but at least it's fast and simple. If all of the items have "done" = 1 then the sum will be equal to the count SUM = COUNT
\nSELECT query_id, SUM(done) AS doneSum, COUNT(done) AS doneCnt \nFROM tbl \nGROUP BY query_id\n
\nAnd if you add a having clause you get the items that are "done".
\nHAVING doneSum = doneCnt\n
\nI'll let you format the solution properly, you can do a DIFERENCE to get the "not done" items or doneSum <> doneCnt.
\nBtw, SQL fiddle here.
\n
soup wrap:
I'm not particularly proud of this solution because it is not very clear, but at least it's fast and simple. If all of the items have "done" = 1, then the sum will equal the count (SUM = COUNT).
SELECT query_id, SUM(done) AS doneSum, COUNT(done) AS doneCnt
FROM tbl
GROUP BY query_id
And if you add a having clause you get the items that are "done".
HAVING doneSum = doneCnt
I'll let you format the solution properly; to get the "not done" items, use doneSum <> doneCnt instead.
Btw, SQL fiddle here.
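A minimal runnable check of the SUM = COUNT idea, using SQLite with invented rows: query 1 is fully done, query 2 is not.

```python
import sqlite3

# A query_id counts as "done" only when every one of its rows has done = 1,
# i.e. SUM(done) equals COUNT(done) within the group.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (query_id INT, done INT)")
conn.executemany("INSERT INTO tbl VALUES (?,?)",
                 [(1, 1), (1, 1), (2, 1), (2, 0)])

done_ids = conn.execute("""
    SELECT query_id
    FROM tbl
    GROUP BY query_id
    HAVING SUM(done) = COUNT(done)
""").fetchall()
print(done_ids)  # only query 1 is fully done
```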
qid & accept id:
(15208232, 15208343)
query:
How to see if a field entry has a corresponding entry in another field?
soup:
Assuming there are no additional columns besides the 3 pairs listed, this can be done with a simple WHERE clause that tests for a non-NULL start date in each column along with a corresponding NULL end. If any of the three conditions is met, the Company will be returned.
\nSELECT DISTINCT Company\nFROM Table1\nWHERE\n (Start1 IS NOT NULL AND End1 IS NULL)\n OR (Start2 IS NOT NULL AND End2 IS NULL)\n OR (Start3 IS NOT NULL AND End3 IS NULL)\n
\nIf your empty fields are actually empty strings '' instead of NULL, substitute the empty string as in:
\n(Start1 <> '' AND End1 = '')\n
\nNote, the DISTINCT isn't needed if the Company column is a unique or primary key.
\n
soup wrap:
Assuming there are no additional columns besides the 3 pairs listed, this can be done with a simple WHERE clause that tests for a non-NULL start date in each column along with a corresponding NULL end. If any of the three conditions is met, the Company will be returned.
SELECT DISTINCT Company
FROM Table1
WHERE
(Start1 IS NOT NULL AND End1 IS NULL)
OR (Start2 IS NOT NULL AND End2 IS NULL)
OR (Start3 IS NOT NULL AND End3 IS NULL)
If your empty fields are actually empty strings '' instead of NULL, substitute the empty string as in:
(Start1 <> '' AND End1 = '')
Note, the DISTINCT isn't needed if the Company column is a unique or primary key.
qid & accept id:
(15237740, 15237755)
query:
select users have more than one distinct records in mysql
soup:
just add having clause
\nSELECT userId, COUNT(DISTINCT webpageId) AS count \nFROM visits \nGROUP BY userId\nHAVING COUNT(DISTINCT webpageId) > 1\n
\nbut if you only what the ID
\nSELECT userId\nFROM visits \nGROUP BY userId\nHAVING COUNT(DISTINCT webpageId) > 1\n
\n\n- SQLFiddle Demo
\n
\nthe reason why you are filtering on HAVING clause and not on WHERE is because, WHERE clause cannot support columns that where aggregated.
\n
soup wrap:
Just add a HAVING clause:
SELECT userId, COUNT(DISTINCT webpageId) AS count
FROM visits
GROUP BY userId
HAVING COUNT(DISTINCT webpageId) > 1
But if you only want the ID:
SELECT userId
FROM visits
GROUP BY userId
HAVING COUNT(DISTINCT webpageId) > 1
The reason you filter in the HAVING clause rather than in WHERE is that the WHERE clause cannot reference aggregated values; it is evaluated before the grouping happens.
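A quick runnable check in SQLite (visit rows invented: user 1 hit two distinct pages, user 2 hit one page twice):

```python
import sqlite3

# COUNT(DISTINCT ...) in HAVING keeps only users who visited more than
# one distinct page; repeat visits to the same page don't count.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (userId INT, webpageId INT)")
conn.executemany("INSERT INTO visits VALUES (?,?)",
                 [(1, 10), (1, 20), (2, 10), (2, 10)])

multi = conn.execute("""
    SELECT userId
    FROM visits
    GROUP BY userId
    HAVING COUNT(DISTINCT webpageId) > 1
""").fetchall()
print(multi)  # only user 1 hit more than one distinct page
```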
qid & accept id:
(15243399, 15243631)
query:
Select all table names from Oracle DB
soup:
Try this
\nSELECT 'Existing Tables: ' || wm_concat(table_name) tablenames \n FROM user_tables;\n
\nFor the sample Oracle HR database it returns
\nTABLENAMES\n------------------------------------------------------------------------------------\nExisting Tables: REGIONS,LOCATIONS,DEPARTMENTS,JOBS,EMPLOYEES,JOB_HISTORY,COUNTRIES\n
\nUPDATE: Example with LISTAGG()
\nSELECT 'Existing Tables: ' || LISTAGG(table_name, ',') \n WITHIN GROUP (ORDER BY table_name) tablenames \n FROM user_tables;\n
\n
soup wrap:
Try this
SELECT 'Existing Tables: ' || wm_concat(table_name) tablenames
FROM user_tables;
For the sample Oracle HR database it returns
TABLENAMES
------------------------------------------------------------------------------------
Existing Tables: REGIONS,LOCATIONS,DEPARTMENTS,JOBS,EMPLOYEES,JOB_HISTORY,COUNTRIES
UPDATE: Example with LISTAGG()
SELECT 'Existing Tables: ' || LISTAGG(table_name, ',')
WITHIN GROUP (ORDER BY table_name) tablenames
FROM user_tables;
qid & accept id:
(15264563, 15264928)
query:
Aggregate on Datetime Column for Pivot
soup:
You could get it like this:
\nSELECT l1.EmpID\n , l1.LoginTime [SignIn]\n , l2.LoginTime [SignOut]\nFROM Login l1\nLEFT JOIN \n Login l2 ON \n l2.EmpID = l1.EmpID\nAND CAST(l2.LoginTime AS DATE) = CAST(l1.LoginTime AS DATE)\nAND l2.status = 'SignOut'\nWHERE l1.status = 'SignIn'\n
\nNote that in case if you had more than one signin/signout per day for an employee and you wanted to get his first SignIn and last SignOut for a day, you would have to change the query:
\nSELECT l1.EmpID\n , MIN(l1.LoginTime) [SignIn]\n , MAX(l2.LoginTime) [SignOut]\nFROM Login l1\nLEFT JOIN \n Login l2 ON \n l2.EmpID = l1.EmpID\nAND CAST(l2.LoginTime AS DATE) = CAST(l1.LoginTime AS DATE)\nAND l2.status = 'SignOut'\nWHERE l1.status = 'SignIn'\nGROUP BY\n l1.EmpID, CAST(l1.LoginTime AS DATE)\n
\nAnd here is another query that also works for multiple signin/signouts of a user during the same day. This will list all of his signin/signouts in a day:
\n;WITH cte1 AS\n(\n SELECT *\n , ROW_NUMBER() OVER \n (PARTITION BY EmpID, CAST(LoginTime AS DATE) ORDER BY LoginTime) \n AS num\n FROM Login\n)\n\nSELECT l1.EmpID\n , l1.LoginTime [SignIn]\n , l2.LoginTime [SignOut]\nFROM cte1 l1\nLEFT JOIN \n cte1 l2 ON \n l2.EmpID = l1.EmpID\nAND CAST(l2.LoginTime AS DATE) = CAST(l1.LoginTime AS DATE)\nAND l2.num = l1.num + 1\nWHERE l1.status = 'SignIn'\n
\nHere is SQL Fiddle for last two queries that handle multiple signin/signout scenarios of a user in a single day, for that purpose I added user with EmpID 102 to sample data.
\n
soup wrap:
You could get it like this:
SELECT l1.EmpID
, l1.LoginTime [SignIn]
, l2.LoginTime [SignOut]
FROM Login l1
LEFT JOIN
Login l2 ON
l2.EmpID = l1.EmpID
AND CAST(l2.LoginTime AS DATE) = CAST(l1.LoginTime AS DATE)
AND l2.status = 'SignOut'
WHERE l1.status = 'SignIn'
Note that if an employee had more than one signin/signout per day and you wanted his first SignIn and last SignOut for each day, you would have to change the query:
SELECT l1.EmpID
, MIN(l1.LoginTime) [SignIn]
, MAX(l2.LoginTime) [SignOut]
FROM Login l1
LEFT JOIN
Login l2 ON
l2.EmpID = l1.EmpID
AND CAST(l2.LoginTime AS DATE) = CAST(l1.LoginTime AS DATE)
AND l2.status = 'SignOut'
WHERE l1.status = 'SignIn'
GROUP BY
l1.EmpID, CAST(l1.LoginTime AS DATE)
And here is another query that also works for multiple signin/signouts of a user during the same day. This will list all of his signin/signouts in a day:
;WITH cte1 AS
(
SELECT *
, ROW_NUMBER() OVER
(PARTITION BY EmpID, CAST(LoginTime AS DATE) ORDER BY LoginTime)
AS num
FROM Login
)
SELECT l1.EmpID
, l1.LoginTime [SignIn]
, l2.LoginTime [SignOut]
FROM cte1 l1
LEFT JOIN
cte1 l2 ON
l2.EmpID = l1.EmpID
AND CAST(l2.LoginTime AS DATE) = CAST(l1.LoginTime AS DATE)
AND l2.num = l1.num + 1
WHERE l1.status = 'SignIn'
Here is an SQL Fiddle for the last two queries, which handle multiple signin/signout scenarios for a user in a single day; for that purpose I added a user with EmpID 102 to the sample data.
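The ROW_NUMBER pairing in the last query can be tried outside SQL Server as well. This SQLite/Python sketch (needs SQLite 3.25+ for window functions; sample rows invented) numbers each employee's events per day and joins event n to event n + 1.

```python
import sqlite3

# Pair each SignIn with the next event of the same employee on the same
# day by joining row number n to row number n + 1.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE login (empid INT, logintime TEXT, status TEXT)")
conn.executemany("INSERT INTO login VALUES (?,?,?)", [
    (101, "2013-03-07 09:00", "SignIn"),
    (101, "2013-03-07 12:00", "SignOut"),
    (101, "2013-03-07 13:00", "SignIn"),
    (101, "2013-03-07 17:00", "SignOut"),
])

pairs = conn.execute("""
    WITH numbered AS (
        SELECT *, ROW_NUMBER() OVER
                  (PARTITION BY empid, date(logintime)
                   ORDER BY logintime) AS num
        FROM login
    )
    SELECT l1.empid, l1.logintime, l2.logintime
    FROM numbered l1
    LEFT JOIN numbered l2
      ON l2.empid = l1.empid
     AND date(l2.logintime) = date(l1.logintime)
     AND l2.num = l1.num + 1
    WHERE l1.status = 'SignIn'
    ORDER BY l1.logintime
""").fetchall()
print(pairs)  # two signin/signout pairs for the one employee
```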
qid & accept id:
(15302356, 15302788)
query:
How can I use XSLT to combine two XML docs, similar to a SQL JOIN
soup:
Accomplishing this in XSLT is not quite as straightforward as it would be in SQL, but assuming you assembled the two input files into a single document ahead of time (which I would recommend if it's not problematic for you):
\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
\nThis XSLT can be used to join the data together:
\n\n \n \n \n\n \n \n \n \n \n \n\n \n \n \n \n\n \n \n \n \n\n \n \n \n
\nWhen this is run on the input XML above, it produces:
\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
\nAnd if you can change your XML a little to indicate the outer and inner group, and which attribute to match on, like this:
\n\n \n ....\n \n \n ....\n \n \n
\nThen you could use this more generic XSLT which, while less efficient, should work for any input similar to the above:
\n\n \n\n \n \n\n \n \n \n \n \n \n\n \n \n \n \n\n \n \n \n \n \n\n \n \n \n \n \n\n \n \n \n
\n
soup wrap:
Accomplishing this in XSLT is not quite as straightforward as it would be in SQL, but assuming you assembled the two input files into a single document ahead of time (which I would recommend if it's not problematic for you):
This XSLT can be used to join the data together:
When this is run on the input XML above, it produces:
And if you can change your XML a little to indicate the outer and inner group, and which attribute to match on, like this:
....
....
Then you could use this more generic XSLT which, while less efficient, should work for any input similar to the above:
qid & accept id:
(15357576, 15357628)
query:
select where.... electrical status is required in ms sql 2005
soup:
You can simply do this:
\nSELECT DISTINCT SONO, ElectricalStatus\nFROM tablename\nWHERE ElectricalStatus = 'Required';\n
\n\nthis will give you:
\n| SONO | ELECTRICALSTATUS |\n---------------------------\n| 1 | Required |\n| 2 | Required |\n
\n
soup wrap:
You can simply do this:
SELECT DISTINCT SONO, ElectricalStatus
FROM tablename
WHERE ElectricalStatus = 'Required';
This will give you:
| SONO | ELECTRICALSTATUS |
---------------------------
| 1 | Required |
| 2 | Required |
qid & accept id:
(15359303, 15359581)
query:
Change table contents to match a query without deleting all rows
soup:
delete from tblA where\n (col1, col2, ...) not in (queryB);\n\ninsert into tblA \n (queryB) minus (select * from tblA);\n
\n
\nEDIT :
\nYou can calculate queryB once if small temporary table will be created (which will contain < 10% of rows of table tblA).
\nIt is assumed that queryB.col1 is never null
\ncreate table diff as\n select \n ta.rowid ta_rid, \n tb.*\n from tblA ta \n full join (queryB) tb \n on ta.col1 = tb.col1 \n and ta.col2 = tb.col2 \n and ta.col3 = tb.col3 \n where \n ta.rowid is null or tb.col1 is null; \n\ndelete from tblA ta \n where ta.rowid in (select d.ta_rid from diff d where d.ta_rid is not null);\ninsert into tblA ta \n select d.col1, d.col2, d.col3 from diff d where d.ta_rid is null; \n
\n
soup wrap:
delete from tblA where
(col1, col2, ...) not in (queryB);
insert into tblA
(queryB) minus (select * from tblA);
EDIT :
You can evaluate queryB just once by materializing it into a small temporary table (assumed here to contain < 10% of the rows of tblA).
It is assumed that queryB.col1 is never null.
create table diff as
select
ta.rowid ta_rid,
tb.*
from tblA ta
full join (queryB) tb
on ta.col1 = tb.col1
and ta.col2 = tb.col2
and ta.col3 = tb.col3
where
ta.rowid is null or tb.col1 is null;
delete from tblA ta
where ta.rowid in (select d.ta_rid from diff d where d.ta_rid is not null);
insert into tblA ta
select d.col1, d.col2, d.col3 from diff d where d.ta_rid is null;
qid & accept id:
(15376335, 15376369)
query:
Pivoting two colums leaving other columns in a table unchanged
soup:
SELECT Product_ID, Date, Colour, Size, Material\nFROM\n (\n SELECT Product_ID, Date, Attribute, Value\n FROM Table1\n ) org\n PIVOT\n (\n MAX(Value)\n FOR Attribute IN (Colour, Size, Material)\n ) pivotHeader\n
\n\n- SQLFiddle Demo
\n
\nOUTPUT
\n╔════════════╦══════╦════════╦════════╦══════════╗\n║ PRODUCT_ID ║ DATE ║ COLOUR ║ SIZE ║ MATERIAL ║\n╠════════════╬══════╬════════╬════════╬══════════╣\n║ 10025135 ║ 2009 ║ Red ║ 20 cm ║ Steel ║\n║ 10025135 ║ 2010 ║ Green ║ (null) ║ Alloy ║\n║ 10025136 ║ 2009 ║ Black ║ 30cm ║ (null) ║\n╚════════════╩══════╩════════╩════════╩══════════╝\n
\nThe other way of doing this is by using MAX() and CASE
\nSELECT Product_ID, DATE,\n MAX(CASE WHEN Attribute = 'Colour' THEN Value END ) Colour,\n MAX(CASE WHEN Attribute = 'Size' THEN Value END ) Size,\n MAX(CASE WHEN Attribute = 'Material' THEN Value END ) Material\nFROM Table1\nGROUP BY Product_ID, DATE\n
\n\n- SQLFiddle Demo
\n
\n
soup wrap:
SELECT Product_ID, Date, Colour, Size, Material
FROM
(
SELECT Product_ID, Date, Attribute, Value
FROM Table1
) org
PIVOT
(
MAX(Value)
FOR Attribute IN (Colour, Size, Material)
) pivotHeader
OUTPUT
╔════════════╦══════╦════════╦════════╦══════════╗
║ PRODUCT_ID ║ DATE ║ COLOUR ║ SIZE ║ MATERIAL ║
╠════════════╬══════╬════════╬════════╬══════════╣
║ 10025135 ║ 2009 ║ Red ║ 20 cm ║ Steel ║
║ 10025135 ║ 2010 ║ Green ║ (null) ║ Alloy ║
║ 10025136 ║ 2009 ║ Black ║ 30cm ║ (null) ║
╚════════════╩══════╩════════╩════════╩══════════╝
The other way of doing this is by using MAX() and CASE
SELECT Product_ID, DATE,
MAX(CASE WHEN Attribute = 'Colour' THEN Value END ) Colour,
MAX(CASE WHEN Attribute = 'Size' THEN Value END ) Size,
MAX(CASE WHEN Attribute = 'Material' THEN Value END ) Material
FROM Table1
GROUP BY Product_ID, DATE
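The MAX(CASE ...) form is the portable one; SQLite, for instance, has no PIVOT operator at all. A runnable sketch with rows mirroring the first product in the answer's output table:

```python
import sqlite3

# Conditional aggregation: one MAX(CASE ...) column per attribute turns
# the attribute/value rows into columns.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (product_id INT, date INT, attribute TEXT, value TEXT)")
conn.executemany("INSERT INTO table1 VALUES (?,?,?,?)", [
    (10025135, 2009, "Colour", "Red"),
    (10025135, 2009, "Size", "20 cm"),
    (10025135, 2009, "Material", "Steel"),
])

row = conn.execute("""
    SELECT product_id, date,
           MAX(CASE WHEN attribute = 'Colour'   THEN value END) AS colour,
           MAX(CASE WHEN attribute = 'Size'     THEN value END) AS size,
           MAX(CASE WHEN attribute = 'Material' THEN value END) AS material
    FROM table1
    GROUP BY product_id, date
""").fetchone()
print(row)  # (10025135, 2009, 'Red', '20 cm', 'Steel')
```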
qid & accept id:
(15387808, 15387854)
query:
MySQL Join two tables count and sum from second table
soup:
You could use two sub-queries:
\nSELECT a.*\n , (SELECT Count(b.id) FROM inquiries I1 WHERE I1.dealer_id = a.id) as counttotal\n , (SELECT SUM(b.cost) FROM inquiries I2 WHERE I2.dealer_id = a.id) as turnover\nFROM dealers a\nORDER BY name ASC\n
\nOr
\nSELECT a.*\n , COALESCE(T.counttotal, 0) as counttotal -- use coalesce or equiv. to turn NULLs to 0\n , COALESCE(T.turnover, 0) as turnover -- use coalesce or equiv. to turn NULLs to 0\n FROM dealers a\n LEFT OUTER JOIN (SELECT a.id, Count(b.id) as counttotal, SUM(b.cost) as turnover\n FROM dealers a1 \n INNER JOIN inquiries b ON a1.id = b.dealer_id\n GROUP BY a.id) T\n ON a.id = T.id\nORDER BY a.name\n
\n
soup wrap:
You could use two sub-queries:
SELECT a.*
     , (SELECT COUNT(I1.id) FROM inquiries I1 WHERE I1.dealer_id = a.id) as counttotal
     , (SELECT SUM(I2.cost) FROM inquiries I2 WHERE I2.dealer_id = a.id) as turnover
FROM dealers a
ORDER BY name ASC
Or
SELECT a.*
, COALESCE(T.counttotal, 0) as counttotal -- use coalesce or equiv. to turn NULLs to 0
, COALESCE(T.turnover, 0) as turnover -- use coalesce or equiv. to turn NULLs to 0
FROM dealers a
LEFT OUTER JOIN (SELECT a1.id, Count(b.id) as counttotal, SUM(b.cost) as turnover
                 FROM dealers a1
                 INNER JOIN inquiries b ON a1.id = b.dealer_id
                 GROUP BY a1.id) T
ON a.id = T.id
ORDER BY a.name
qid & accept id:
(15400897, 15401004)
query:
How to change date in database
soup:
Try this -
\nUPDATE TABLE set fieldname = DATE_ADD( fieldname, INTERVAL 3 YEAR ) \n
\nFor more information and play part with dates you can check this link :-
\n\nWorking Fiddle -- http://sqlfiddle.com/#!2/9c669/1
\nEDIT
\nThis solution updates date type is VARCHAR and structure of date like - 2 January 2001
\nIt will update date to 2 January 2004 by the interval of 3
\nAlthough the best way to handle date is use date DATATYPEs(ex timestamp, datetime etc) instead of saving it in VARCHARs
\nTested code --
\nUPDATE date \nSET `varchardate`= DATE_FORMAT(DATE_ADD( str_to_date(`varchardate`, '%d %M %Y'), INTERVAL 3 YEAR ) , '%d %M %Y')\n
\n
soup wrap:
Try this -
UPDATE TABLE set fieldname = DATE_ADD( fieldname, INTERVAL 3 YEAR )
For more information on date arithmetic, see the link below:
Working Fiddle -- http://sqlfiddle.com/#!2/9c669/1
EDIT
This solution is for the case where the date is stored as a VARCHAR formatted like "2 January 2001".
It will update the date to "2 January 2004", i.e. by an interval of 3 years.
That said, the best way to handle dates is to use a date datatype (e.g. TIMESTAMP, DATETIME) instead of saving them in VARCHARs.
Tested code --
UPDATE date
SET `varchardate`= DATE_FORMAT(DATE_ADD( str_to_date(`varchardate`, '%d %M %Y'), INTERVAL 3 YEAR ) , '%d %M %Y')
qid & accept id:
(15414398, 15414661)
query:
Merge two or more columns dynamically based on table columns?
soup:
You will want to use the PIVOT function to transform the data from columns into rows. If you are going to have an unknown number of values that need to be columns, then you will need to use dynamic SQL.
\nIt is easier to see a static or hard-coded version first and then convert it into a dynamic SQL version. A static version is used when you have a known number of values:
\nselect *\nfrom\n(\n select e.employeeid,\n s.subsection +'_'+s.sectioncode+'_Cost' Section,\n e.cost\n from employee e\n inner join sectionnames s\n on e.sectionid = s.sectionid\n) src\npivot\n(\n max(cost)\n for section in (Individual_xYz_Cost, Family_xYz_Cost,\n Friends_CYD_Cost, level1_PCPO_Cost,\n level2_PCPO_Cost, level3_PCPO_Cost)\n) piv;\n
\nSee SQL Fiddle with Demo.
\nIf you need the query to be flexible, then you will convert this to use dynamic SQL:
\nDECLARE @cols AS NVARCHAR(MAX),\n @query AS NVARCHAR(MAX)\n\nselect @cols = STUFF((SELECT ',' + QUOTENAME(subsection +'_'+sectioncode+'_Cost') \n from SectionNames\n group by subsection, sectioncode, sectionid\n order by sectionid\n FOR XML PATH(''), TYPE\n ).value('.', 'NVARCHAR(MAX)') \n ,1,1,'')\n\nset @query = 'SELECT employeeid,' + @cols + ' \n from \n (\n select e.employeeid,\n s.subsection +''_''+s.sectioncode+''_Cost'' Section,\n e.cost\n from employee e\n inner join sectionnames s\n on e.sectionid = s.sectionid\n ) x\n pivot \n (\n max(cost)\n for section in (' + @cols + ')\n ) p '\n\nexecute(@query)\n
\n\nThe result of both is:
\n| EMPLOYEEID | INDIVIDUAL_XYZ_COST | FAMILY_XYZ_COST | FRIENDS_CYD_COST | LEVEL1_PCPO_COST | LEVEL2_PCPO_COST | LEVEL3_PCPO_COST |\n----------------------------------------------------------------------------------------------------------------------------------\n| 1 | $200 | $300 | $40 | $10 | No Level | No Level |\n
\n
soup wrap:
You will want to use the PIVOT function to transform the data from rows into columns. If you are going to have an unknown number of values that need to become columns, then you will need to use dynamic SQL.
It is easier to see a static or hard-coded version first and then convert it into a dynamic SQL version. A static version is used when you have a known number of values:
select *
from
(
select e.employeeid,
s.subsection +'_'+s.sectioncode+'_Cost' Section,
e.cost
from employee e
inner join sectionnames s
on e.sectionid = s.sectionid
) src
pivot
(
max(cost)
for section in (Individual_xYz_Cost, Family_xYz_Cost,
Friends_CYD_Cost, level1_PCPO_Cost,
level2_PCPO_Cost, level3_PCPO_Cost)
) piv;
See SQL Fiddle with Demo.
If you need the query to be flexible, then you will convert this to use dynamic SQL:
DECLARE @cols AS NVARCHAR(MAX),
@query AS NVARCHAR(MAX)
select @cols = STUFF((SELECT ',' + QUOTENAME(subsection +'_'+sectioncode+'_Cost')
from SectionNames
group by subsection, sectioncode, sectionid
order by sectionid
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
,1,1,'')
set @query = 'SELECT employeeid,' + @cols + '
from
(
select e.employeeid,
s.subsection +''_''+s.sectioncode+''_Cost'' Section,
e.cost
from employee e
inner join sectionnames s
on e.sectionid = s.sectionid
) x
pivot
(
max(cost)
for section in (' + @cols + ')
) p '
execute(@query)
The result of both is:
| EMPLOYEEID | INDIVIDUAL_XYZ_COST | FAMILY_XYZ_COST | FRIENDS_CYD_COST | LEVEL1_PCPO_COST | LEVEL2_PCPO_COST | LEVEL3_PCPO_COST |
----------------------------------------------------------------------------------------------------------------------------------
| 1 | $200 | $300 | $40 | $10 | No Level | No Level |
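The two-step dynamic approach (discover the column names from the data, then build and run the query) isn't tied to SQL Server's PIVOT. As a sketch, the same idea can be expressed in Python over SQLite using conditional aggregation, which is the portable stand-in for PIVOT; the table contents here are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sectionnames (sectionid INTEGER, subsection TEXT, sectioncode TEXT);
CREATE TABLE employee (employeeid INTEGER, sectionid INTEGER, cost TEXT);
INSERT INTO sectionnames VALUES (1, 'Individual', 'xYz'), (2, 'Family', 'xYz');
INSERT INTO employee VALUES (1, 1, '$200'), (1, 2, '$300');
""")

# Step 1: derive the output column names from the data, like @cols above.
cols = [f"{sub}_{code}_Cost" for sub, code in conn.execute(
    "SELECT subsection, sectioncode FROM sectionnames ORDER BY sectionid")]

# Step 2: build one MAX(CASE ...) per column -- conditional aggregation
# plays the role of PIVOT -- then run the generated query.
case_exprs = ", ".join(
    f"MAX(CASE WHEN s.subsection || '_' || s.sectioncode || '_Cost' = '{c}' "
    f"THEN e.cost END) AS [{c}]" for c in cols)
query = (f"SELECT e.employeeid, {case_exprs} "
         "FROM employee e JOIN sectionnames s ON e.sectionid = s.sectionid "
         "GROUP BY e.employeeid")
row = conn.execute(query).fetchone()
```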
qid & accept id:
(15420689, 15422128)
query:
How do you do a PostgreSQL fulltext search on encoded or encrypted data?
soup:
Encrypted values
For encrypted values you can't. Even if you created the tsvector client-side, the tsvector would contain a form of the encrypted text so it wouldn't be acceptable for most applications. Observe:
regress=> SELECT to_tsvector('my secret password is CandyStrip3r');
to_tsvector
------------------------------------------
'candystrip3r':5 'password':3 'secret':2
(1 row)
... whoops. It doesn't matter if you create that value client-side instead of using to_tsvector, it'll still have your password in cleartext. You could encrypt the tsvector, but then you couldn't use it for full-text search.
Sure, given the encrypted value:
CREATE EXTENSION pgcrypto;
regress=> SELECT encrypt( convert_to('my s3kritPassw1rd','utf-8'), '\xdeadbeef', 'aes');
encrypt
--------------------------------------------------------------------
\x10441717bfc843677d2b76ac357a55ac5566ffe737105332552f98c2338480ff
(1 row)
you can (but shouldn't) do something like this:
regress=> SELECT to_tsvector( convert_from(decrypt('\x10441717bfc843677d2b76ac357a55ac5566ffe737105332552f98c2338480ff', '\xdeadbeef', 'aes'), 'utf-8') );
to_tsvector
--------------------
's3kritpassw1rd':2
(1 row)
... but if the problems with that aren't immediately obvious after scrolling right in the code display box then you should really be getting somebody else to do your security design for you ;-)
There's been tons of research on ways to perform operations on encrypted values without decrypting them (homomorphic encryption), like adding two encrypted numbers together to produce a result that's encrypted with the same key, so the process doing the adding doesn't need the ability to decrypt the inputs in order to get the output. It's possible some of this could be applied to fts - but it's way beyond my level of expertise in the area and likely to be horribly inefficient and/or cryptographically weak anyway.
Base64-encoded values
For base64 you decode the base64 before feeding it into to_tsvector. Because decode returns a bytea and you know the encoded data is text you need to use convert_from to decode the bytea into text in the database encoding, eg:
regress=> SELECT encode(convert_to('some text to search','utf-8'), 'base64');
encode
------------------------------
c29tZSB0ZXh0IHRvIHNlYXJjaA==
(1 row)
regress=> SELECT to_tsvector(convert_from( decode('c29tZSB0ZXh0IHRvIHNlYXJjaA==', 'base64'), getdatabaseencoding() ));
to_tsvector
---------------------
'search':4 'text':2
(1 row)
In this case I've used the database encoding as the input to convert_from, but you need to make sure you use the encoding that the underlying base64 encoded text was in. Your application is responsible for getting this right. I suggest either storing the encoding in a 2nd column or ensuring that your application always encodes the text as utf-8 before applying base64 encoding.
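The decode-then-convert pipeline maps directly onto Python's standard library, which makes the two distinct decoding steps (transport encoding versus character encoding) easy to see. A minimal sketch, using the same base64 string as above:

```python
import base64

# The stored value: base64 over UTF-8 text (same string as the answer).
stored = "c29tZSB0ZXh0IHRvIHNlYXJjaA=="

raw = base64.b64decode(stored)   # transport decoding -> bytes (like bytea)
text = raw.decode("utf-8")       # character decoding -> str (like text)

# Only now is the value fit for tokenising / feeding to to_tsvector.
tokens = text.split()
```

Exactly as in the SQL version, the character encoding passed to `.decode()` must match whatever the application used before base64-encoding.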
qid & accept id:
(15428168, 15428204)
query:
SQL Server - Create a copy of a database table and place it in the same database?
soup:
Use SELECT ... INTO:
SELECT *
INTO ABC_1
FROM ABC;
This will create a new table ABC_1 that has the same column structure as ABC and contains the same data. Constraints (e.g. keys, default values), however, are -not- copied.
You can run this query multiple times with a different table name each time.
If you don't need to copy the data, only to create a new empty table with the same column structure, add a WHERE clause with a falsy expression:
SELECT *
INTO ABC_1
FROM ABC
WHERE 1 <> 1;
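As a rough cross-check of the "structure and data, but no constraints" behaviour, SQLite's `CREATE TABLE ... AS SELECT` works the same way; the sample table here is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ABC (id INTEGER PRIMARY KEY, name TEXT DEFAULT 'x');
INSERT INTO ABC (name) VALUES ('a'), ('b');

CREATE TABLE ABC_1 AS SELECT * FROM ABC;              -- structure + data
CREATE TABLE ABC_2 AS SELECT * FROM ABC WHERE 1 <> 1; -- structure only
""")
rows_copy  = conn.execute("SELECT COUNT(*) FROM ABC_1").fetchone()[0]
rows_empty = conn.execute("SELECT COUNT(*) FROM ABC_2").fetchone()[0]

# The PRIMARY KEY constraint did not travel with the copy:
ddl = conn.execute(
    "SELECT sql FROM sqlite_master WHERE name = 'ABC_1'").fetchone()[0]
```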
qid & accept id:
(15436509, 15436649)
query:
SQL: Copy some field values to another record inside the same table
soup:
UPDATE data a
INNER JOIN data b
ON a.originalid = b.id
SET a.data = b.data
OUTPUT
╔════╦════════════╦════════════╗
║ ID ║ ORIGINALID ║ STRING ║
╠════╬════════════╬════════════╣
║ 1 ║ (null) ║ original 1 ║
║ 2 ║ (null) ║ original 2 ║
║ 3 ║ 1 ║ original 1 ║
║ 4 ║ 2 ║ original 2 ║
║ 5 ║ 2 ║ original 2 ║
╚════╩════════════╩════════════╝
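The `UPDATE ... JOIN` syntax above is MySQL-specific. In engines without it (SQLite here, as a sketch), the same self-referencing copy can be written as a correlated subquery; table and column names follow the answer's query, and the rows mirror the output table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE data (id INTEGER, originalid INTEGER, data TEXT);
INSERT INTO data VALUES
    (1, NULL, 'original 1'), (2, NULL, 'original 2'),
    (3, 1, NULL), (4, 2, NULL), (5, 2, NULL);
""")
# Correlated subquery instead of UPDATE ... JOIN: look up the row whose
# id matches this row's originalid and copy its data value.
conn.execute("""
    UPDATE data
    SET data = (SELECT b.data FROM data b WHERE b.id = data.originalid)
    WHERE originalid IS NOT NULL""")
rows = conn.execute("SELECT id, data FROM data ORDER BY id").fetchall()
```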
qid & accept id:
(15445216, 15445327)
query:
How to get last day of a month from a given date?
soup:
Oracle has a last_day() function:
SELECT LAST_DAY(to_date('04/04/1924','MM/DD/YYYY')) from dual;
SELECT LAST_DAY(ADD_MONTHS(to_date('04/04/1924','MM/DD/YYYY'), -1)) from dual;
SELECT LAST_DAY(ADD_MONTHS(to_date('04/04/1924','MM/DD/YYYY'), -2)) from dual;
Results:
April, 30 1924 00:00:00+0000
March, 31 1924 00:00:00+0000
February, 29 1924 00:00:00+0000
Use Add_Months() on your date to get the appropriate month, and then apply last_day().
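Outside the database, the same `ADD_MONTHS`/`LAST_DAY` combination can be sketched with Python's `calendar` module; the helper name is made up for illustration:

```python
import calendar
from datetime import date

def last_day(d: date, months_back: int = 0) -> date:
    """Mirror LAST_DAY(ADD_MONTHS(d, -months_back))."""
    y, m = d.year, d.month - months_back
    while m < 1:          # borrow years while stepping months backwards
        m += 12
        y -= 1
    # monthrange() returns (weekday of day 1, number of days in month).
    return date(y, m, calendar.monthrange(y, m)[1])

d = date(1924, 4, 4)
```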
qid & accept id:
(15448705, 15448712)
query:
Maximum of the count of the grouped elements
soup:
Just add TOP to limit the number of results:
select TOP 1 COUNT(*) as 'Number of times a product is sold at same quantity'
from Sales.SalesOrderDetail
group by OrderQty, ProductID
order by COUNT(*) desc
- SQLFiddle Demo (different records, but the same idea)
- SQLFiddle Demo (uses a CTE and a window function)
UPDATE 1
WITH results
AS
(
select COUNT(*) as [Number of times a product is sold at same quantity],
DENSE_RANK() OVER (ORDER BY COUNT(*) DESC) rank_no
from Sales.SalesOrderDetail
group by OrderQty, ProductID
)
SELECT [Number of times a product is sold at same quantity]
FROM results
WHERE rank_no = 2
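Both variants, TOP 1 and the ranked CTE, can be sketched on a toy table in SQLite, where LIMIT plays the role of TOP and DENSE_RANK() has been available since 3.25; the data is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (OrderQty INTEGER, ProductID INTEGER);
INSERT INTO sales VALUES (1, 10), (1, 10), (1, 10), (2, 10), (2, 10), (5, 20);
""")

# TOP 1 ... ORDER BY COUNT(*) DESC becomes LIMIT 1 in SQLite.
top = conn.execute("""
    SELECT COUNT(*) FROM sales
    GROUP BY OrderQty, ProductID
    ORDER BY COUNT(*) DESC LIMIT 1""").fetchone()[0]

# The ranked CTE from UPDATE 1 reaches the second-highest count.
second = conn.execute("""
    WITH results AS (
        SELECT COUNT(*) AS n,
               DENSE_RANK() OVER (ORDER BY COUNT(*) DESC) AS rank_no
        FROM sales
        GROUP BY OrderQty, ProductID)
    SELECT n FROM results WHERE rank_no = 2""").fetchone()[0]
```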
qid & accept id:
(15512015, 15513671)
query:
Update PostgreSQL table with values from self
soup:
Correlated subqueries are infamous for abysmal performance. Doesn't matter much for small tables, matters a lot for big tables. Use one of these instead, preferably the second:
Query 1
WITH cte AS (
SELECT *, dense_rank() OVER (ORDER BY dob) AS drk
FROM person
)
UPDATE person p
SET younger_sibling_name = y.name
,younger_sibling_dob = y.dob
FROM cte x
JOIN (SELECT DISTINCT ON (drk) * FROM cte) y ON y.drk = x.drk + 1
WHERE x.pid = p.pid;
-> SQLfiddle (with extended test case)
In the CTE cte, use the window function dense_rank() to get a rank without gaps according to the dob for every person.
Join cte to itself, but remove duplicates on dob from the second instance. Thereby everybody gets exactly one UPDATE. If more than one person shares the same dob, the same one is selected as younger sibling for all persons on the next dob. I do this with:
(SELECT DISTINCT ON (drk) * FROM cte)
Add ORDER BY drk, ... if you want to pick a particular person for every dob.
If no younger person exists, no UPDATE happens and the columns stay NULL.
Indices on dob and pid make this fast.
Query 2
WITH cte AS (
SELECT dob, min(name) AS name
,row_number() OVER (ORDER BY dob) rn
FROM person p
GROUP BY dob
)
UPDATE person p
SET younger_sibling_name = y.name
,younger_sibling_dob = y.dob
FROM cte x
JOIN cte y ON y.rn = x.rn + 1
WHERE x.dob = p.dob;
This works, because aggregate functions are applied before window functions. And it should be very fast, since both operations agree on the sort order.
Obviates the need for a later DISTINCT like in query 1.
Result is the same as query 1, exactly.
Again, you can add more columns to ORDER BY to pick a particular person for every dob.
Only needs an index on dob to be fast.
qid & accept id:
(15532084, 15532181)
query:
How do I add a calculated column in sql workbench / j
soup:
You can do this by:
 ALTER TABLE table_one
 ADD COLUMN test_column VARCHAR(100) NULL;
then update all rows by:
UPDATE table_one
SET test_column = (CASE WHEN LEFT(name,3) = 'Ads' THEN 'ok' ELSE 'no' END)
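The two steps can be sketched end to end in SQLite (sample rows invented). Note the single quotes: in standard SQL, `'Ads'` is a string literal while double quotes denote identifiers, and `substr(name, 1, 3)` is SQLite's spelling of `LEFT(name, 3)`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table_one (name TEXT);
INSERT INTO table_one VALUES ('Adsense'), ('Banner');

ALTER TABLE table_one ADD COLUMN test_column VARCHAR(100);

UPDATE table_one
SET test_column = CASE WHEN substr(name, 1, 3) = 'Ads' THEN 'ok' ELSE 'no' END;
""")
rows = conn.execute("SELECT name, test_column FROM table_one").fetchall()
```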
qid & accept id:
(15541196, 15541225)
query:
how to fetch all data from one table in mysql?
soup:
Use LEFT JOIN instead:
SELECT
m.medianame,
IFNULL(COUNT(ad.id), 0) AS Total
FROM a_mediatype as m
LEFT JOIN a_advertise AS a ON a.mediaTypeId = m.mediaId
LEFT JOIN a_ad_display AS ad ON ad.advId = a.advId
LEFT JOIN organization_ AS o ON a.organizationId = o.organizationId
LEFT JOIN organization_ AS p ON o.organizationId = p.organizationId
AND p.organizationId = '37423'
AND o.treePath LIKE CONCAT( p.treePath, '%')
GROUP BY m.medianame;
SQL Fiddle Demo
This will give you:
| MEDIANAME | TOTAL |
---------------------
| animation | 13 |
| image | 2 |
| video | 0 |
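A minimal sketch of why LEFT JOIN is the fix: counting a column from the outer-joined table yields 0 for unmatched rows, because COUNT ignores NULLs. The sample data below is invented, not the question's:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a_mediatype (mediaId INTEGER, medianame TEXT);
CREATE TABLE ads (id INTEGER, mediaTypeId INTEGER);
INSERT INTO a_mediatype VALUES (1, 'animation'), (2, 'image'), (3, 'video');
INSERT INTO ads VALUES (1, 1), (2, 1), (3, 2);
""")
# COUNT(a.id) counts only matched rows; unmatched media types keep 0.
rows = conn.execute("""
    SELECT m.medianame, COUNT(a.id) AS Total
    FROM a_mediatype m
    LEFT JOIN ads a ON a.mediaTypeId = m.mediaId
    GROUP BY m.medianame
    ORDER BY Total DESC""").fetchall()
```

An INNER JOIN would have dropped the 'video' row entirely instead of reporting 0.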
qid & accept id:
(15543977, 15546165)
query:
MS SQL Server 2008 :Getting start date and end date of the week to next 8 weeks
soup:
Try this:
DECLARE @startDate DATETIME
DECLARE @currentDate DATETIME
DECLARE @numberOfWeeks INT
DECLARE @dates TABLE(
StartDate DateTime,
EndDate DateTime
)
SET @startDate = GETDATE()--'2012-01-01' -- Put whatever you want here
SET @numberOfWeeks = 8 -- Choose number of weeks here
SET @currentDate = @startDate
while @currentDate < dateadd(week, @numberOfWeeks, @startDate)
begin
INSERT INTO @Dates(StartDate, EndDate) VALUES (@currentDate, dateadd(day, 6, @currentDate))
set @currentDate = dateadd(day, 7, @currentDate);
end
SELECT * FROM @dates
This will give you something like this:
StartDate EndDate
21/03/2013 11:22:46 27/03/2013 11:22:46
28/03/2013 11:22:46 03/04/2013 11:22:46
04/04/2013 11:22:46 10/04/2013 11:22:46
11/04/2013 11:22:46 17/04/2013 11:22:46
18/04/2013 11:22:46 24/04/2013 11:22:46
25/04/2013 11:22:46 01/05/2013 11:22:46
02/05/2013 11:22:46 08/05/2013 11:22:46
09/05/2013 11:22:46 15/05/2013 11:22:46
Or you could tweak the final select if you don't want the time component, like this:
SELECT CONVERT(VARCHAR, StartDate, 103), CONVERT(VARCHAR, EndDate, 103) FROM @dates
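The WHILE loop above is plain date arithmetic, which a short Python sketch makes concrete; the start date is taken from the sample output:

```python
from datetime import date, timedelta

def week_ranges(start: date, weeks: int):
    """(start, end) pairs where end = start + 6 days, as in the WHILE loop."""
    out = []
    current = start
    for _ in range(weeks):
        out.append((current, current + timedelta(days=6)))
        current += timedelta(days=7)   # next week starts the following day
    return out

# Start date taken from the sample output above.
ranges = week_ranges(date(2013, 3, 21), 8)
```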
qid & accept id:
(15559090, 15560009)
query:
Combine two tables into a new one so that select rows from the other one are ignored
soup:
According to your description, the query could look like this:
I use LEFT JOIN / IS NULL to exclude rows from the second table for the same location and date. NOT EXISTS would be the other good option.
UNION simply doesn't do what you describe.
CREATE TABLE transactions_combined AS
SELECT date, location_code, product_code, quantity
FROM transactions_kitchen k
UNION ALL
SELECT h.date, h.location_code, h.product_code, h.quantity
FROM transactions_admin h
LEFT JOIN transactions_kitchen k USING (location_code, date)
WHERE k.location_code IS NULL;
Use CREATE TABLE AS instead of SELECT INTO.
I quote the manual on SELECT INTO:
CREATE TABLE AS is functionally similar to SELECT INTO. CREATE TABLE AS
is the recommended syntax, since this form of SELECT INTO is not
available in ECPG or PL/pgSQL, because they interpret the INTO clause
differently. Furthermore, CREATE TABLE AS offers a superset of the
functionality provided by SELECT INTO.
Or, if the target table already exists:
INSERT INTO transactions_combined
SELECT ...
I would advise not to use date as column name. It's a reserved word in every SQL standard and a function and data type name in PostgreSQL.
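A runnable sketch of the combined query in SQLite, with invented rows: the admin row that collides on location and date is dropped by the anti-join, the other survives. The short column name `d` follows the answer's advice to avoid the reserved word date:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE transactions_kitchen (d TEXT, loc TEXT, qty INTEGER);
CREATE TABLE transactions_admin   (d TEXT, loc TEXT, qty INTEGER);
INSERT INTO transactions_kitchen VALUES ('2013-01-01', 'A', 5);
INSERT INTO transactions_admin   VALUES ('2013-01-01', 'A', 9),  -- shadowed
                                        ('2013-01-02', 'B', 7);  -- kept

CREATE TABLE transactions_combined AS
SELECT d, loc, qty FROM transactions_kitchen
UNION ALL
SELECT h.d, h.loc, h.qty
FROM transactions_admin h
LEFT JOIN transactions_kitchen k ON k.loc = h.loc AND k.d = h.d
WHERE k.loc IS NULL;   -- anti-join: no kitchen row for this loc/date
""")
rows = sorted(conn.execute("SELECT * FROM transactions_combined"))
```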
qid & accept id:
(15616278, 15632566)
query:
SQL convert Seconds to Minutes to Hours
soup:
With the help of Steoleary, I have managed a solution:
DECLARE @SecondsToConvert int
SET @SecondsToConvert = (SELECT (SUM(DATEDIFF(hour,InviteTime,EndTime) * 3600) + SUM(DATEDIFF(minute,InviteTime,EndTime) * 60) + SUM(DATEDIFF(second,InviteTime,EndTime) * 1)) AS [Seconds]
FROM [LcsCDR].[dbo].[SessionDetailsView]
WHERE FromUri LIKE '%robert%'
AND (CAST([InviteTime] AS date)) BETWEEN '2012-12-27' AND '2013-01-28'
AND MediaTypes = '16'
GROUP BY FromUri)
-- Declare variables
DECLARE @Hours int
DECLARE @Minutes int
DECLARE @Seconds int
-- Set the calculations for hour, minute and second
SET @Hours = @SecondsToConvert/3600
SET @Minutes = (@SecondsToConvert % 3600) / 60
SET @Seconds = @SecondsToConvert % 60
SELECT COUNT(*) AS 'Aantal gesprekken'
,FromUri AS 'Medewerker'
,@Hours AS 'Uren' ,@Minutes AS 'Minuten' , @Seconds AS 'Seconden'
FROM [LcsCDR].[dbo].[SessionDetailsView]
WHERE FromUri LIKE '%robert%'
AND (CAST([InviteTime] AS date)) BETWEEN '2012-12-27' AND '2013-01-28'
AND MediaTypes = '16'
GROUP BY FromUri
As a result, I now get the accurate time.
302 robert 28 19 56
28 hours, 19 minutes and 56 seconds, just like it should be :)
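The three SET statements are just integer division and modulo; `divmod` spells out the same arithmetic, checked against the 28:19:56 total in the result:

```python
def hms(seconds_to_convert: int):
    # Same arithmetic as the three SET statements above.
    hours, rest = divmod(seconds_to_convert, 3600)
    minutes, seconds = divmod(rest, 60)
    return hours, minutes, seconds

# 28 hours, 19 minutes, 56 seconds expressed as a second count.
total = 28 * 3600 + 19 * 60 + 56
```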
qid & accept id:
(15616638, 15616794)
query:
How to remove duplicate rows from a join query in mysql
soup:
Basically, you can filter the cross product of the table with itself via a.Name < b.Name:
SELECT a.Name Name1, b.Name Name2
FROM TableName a, TableName b
WHERE a.Name < b.Name
ORDER BY Name1, Name2
OUTPUT
╔═══════╦═════════╗
║ NAME1 ║ NAME2 ║
╠═══════╬═════════╣
║ Amit ║ Bhagi ║
║ Amit ║ Chinmoy ║
║ Bhagi ║ Chinmoy ║
╚═══════╩═════════╝
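The `a.Name < b.Name` trick is the SQL spelling of "unordered pairs": only one orientation of each pair passes the filter, so no duplicate or reversed rows survive. A sketch showing it agrees with `itertools.combinations`:

```python
from itertools import combinations

names = ["Amit", "Bhagi", "Chinmoy"]

# The SQL filter: keep only the alphabetically ordered orientation.
pairs_sql_style = [(a, b) for a in names for b in names if a < b]

# combinations() of the sorted list yields the same unordered pairs.
pairs_itertools = list(combinations(sorted(names), 2))
```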
qid & accept id:
(15621609, 15621718)
query:
T-SQL Conditional Order By
soup:
CASE is an expression that returns a value. It is not for control-of-flow, like IF. And you can't use IF within a query.
Unfortunately, there are some limitations with CASE expressions that make it cumbersome to do what you want. For example, all of the branches in a CASE expression must return the same type, or be implicitly convertible to the same type. I wouldn't try that with strings and dates. You also can't use CASE to specify sort direction.
SELECT column_list_please
FROM dbo.Product -- dbo prefix please
ORDER BY
CASE WHEN @sortDir = 'asc' AND @sortOrder = 'name' THEN name END,
CASE WHEN @sortDir = 'asc' AND @sortOrder = 'created_date' THEN created_date END,
CASE WHEN @sortDir = 'desc' AND @sortOrder = 'name' THEN name END DESC,
CASE WHEN @sortDir = 'desc' AND @sortOrder = 'created_date' THEN created_date END DESC;
An arguably easier solution (especially if this gets more complex) is to use dynamic SQL. To thwart SQL injection you can test the values:
IF @sortDir NOT IN ('asc', 'desc')
OR @sortOrder NOT IN ('name', 'created_date')
BEGIN
RAISERROR('Invalid params', 11, 1);
RETURN;
END
DECLARE @sql NVARCHAR(MAX) = N'SELECT column_list_please
FROM dbo.Product ORDER BY ' + @sortOrder + ' ' + @sortDir;
EXEC sp_executesql @sql;
Another plus for dynamic SQL, in spite of all the fear-mongering that is spread about it: you can get the best plan for each sort variation, instead of one single plan that will optimize to whatever sort variation you happened to use first. It also performed best universally in a recent performance comparison I ran:
http://sqlperformance.com/conditional-order-by
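The same whitelist-then-concatenate pattern can be sketched outside T-SQL (hypothetical table and data): because both parameters are checked against fixed sets before being interpolated, no user-controlled text ever reaches the SQL string:

```python
import sqlite3

# Hypothetical table standing in for dbo.Product.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Product (name TEXT, created_date TEXT);
INSERT INTO Product VALUES ('b', '2013-01-02'), ('a', '2013-01-01');
""")

ALLOWED_COLS = {"name", "created_date"}   # mirrors the NOT IN guard
ALLOWED_DIRS = {"asc", "desc"}

def sorted_products(sort_order: str, sort_dir: str):
    # Reject anything outside the whitelist, like the RAISERROR branch.
    if sort_order not in ALLOWED_COLS or sort_dir not in ALLOWED_DIRS:
        raise ValueError("Invalid params")
    # Safe to interpolate: both values are known identifiers/keywords.
    return conn.execute(
        f"SELECT name FROM Product ORDER BY {sort_order} {sort_dir}").fetchall()

rows = sorted_products("name", "asc")
```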
qid & accept id:
(15622474, 15623355)
query:
SQL Rolling Total up to a certain date
soup:
Unfortunately with your table structure of points you will have to unpivot the data. An unpivot takes the data from the multiple columns into rows. Once the data is in the rows, it will be much easier to join, filter the data and total the points for each account. The code to unpivot the data will be similar to this:
select account,
cast(cast(year as varchar(4))+'-'+replace(month_col, 'M', '')+'-01' as date) full_date,
pts
from points
unpivot
(
pts
for month_col in ([M01], [M02], [M03], [M04], [M05], [M06], [M07], [M08], [M09], [M10], [M11], [M12])
) unpiv
See SQL Fiddle with Demo. The query gives a result similar to this:
| ACCOUNT | FULL_DATE | PTS |
------------------------------
| 123 | 2011-01-01 | 10 |
| 123 | 2011-02-01 | 0 |
| 123 | 2011-03-01 | 0 |
| 123 | 2011-04-01 | 0 |
| 123 | 2011-05-01 | 10 |
Once the data is in this format, you can join the Customers table to get the total points for each account, so the code will be similar to the following:
select
c.account, sum(pts) TotalPoints
from customers c
inner join
(
select account,
cast(cast(year as varchar(4))+'-'+replace(month_col, 'M', '')+'-01' as date) full_date,
pts
from points
unpivot
(
pts
for month_col in ([M01], [M02], [M03], [M04], [M05], [M06], [M07], [M08], [M09], [M10], [M11], [M12])
) unpiv
) p
on c.account = p.account
where
(
c.enddate = '9999-12-31'
and full_date >= dateadd(year, -1, getdate())
and full_date <= getdate()
)
or
(
c.enddate <> '9999-12-31'
and dateadd(year, -1, [enddate]) <= full_date
and full_date <= [enddate]
)
group by c.account
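Where UNPIVOT isn't available (SQLite here, as a sketch), one UNION ALL branch per month column produces the same account/date/points rows; two month columns keep the sketch short, and the data is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE points (account INTEGER, year INTEGER, M01 INTEGER, M02 INTEGER);
INSERT INTO points VALUES (123, 2011, 10, 0);
""")
# One UNION ALL branch per month column plays the role of UNPIVOT.
rows = conn.execute("""
    SELECT account, year || '-01-01' AS full_date, M01 AS pts FROM points
    UNION ALL
    SELECT account, year || '-02-01', M02 FROM points
    ORDER BY full_date""").fetchall()
```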
qid & accept id:
(15627299, 15627345)
query:
Using 'AND' in a many-to-many relationship
soup:
This problem is commonly known as Relational Division.
SELECT a.Name
FROM [user] a
INNER JOIN UserInGroup b
ON a.ID = b.UserID
INNER JOIN [Group] c
ON b.groupID = c.TypeId
WHERE c.Name IN ('Directors','London')
GROUP BY a.Name
HAVING COUNT(*) = 2
But if a UNIQUE constraint is not enforced on the groups of every user, the DISTINCT keyword is needed to count only unique groups:
SELECT a.Name
FROM [user] a
INNER JOIN UserInGroup b
ON a.ID = b.UserID
INNER JOIN [Group] c
ON b.groupID = c.TypeId
WHERE c.Name IN ('Directors','London')
GROUP BY a.Name
HAVING COUNT(DISTINCT c.Name) = 2
OUTPUT from both queries
╔══════╗
║ NAME ║
╠══════╣
║ Bob ║
╚══════╝
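The HAVING COUNT trick can be checked end-to-end on any engine. A minimal sketch with sqlite3, where the table and column names are adapted guesses at the schema implied by the answer:

```python
import sqlite3

# Relational division sketch: find users who belong to ALL listed groups.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE user(id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE grp(id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE user_in_group(user_id INT, group_id INT);
    INSERT INTO user VALUES (1,'Bob'),(2,'Ann');
    INSERT INTO grp VALUES (10,'Directors'),(20,'London');
    INSERT INTO user_in_group VALUES (1,10),(1,20),(2,10);
""")
rows = conn.execute("""
    SELECT u.name
    FROM user u
    JOIN user_in_group ug ON u.id = ug.user_id
    JOIN grp g ON ug.group_id = g.id
    WHERE g.name IN ('Directors','London')
    GROUP BY u.name
    HAVING COUNT(DISTINCT g.name) = 2
""").fetchall()
print(rows)  # only Bob is in both groups
```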
qid & accept id:
(15650876, 15684494)
query:
Searching Across Multiple Tables
soup:
Your attributes are attached to pages. So, you can search for pages that have certain attributes, by checking if those Attributes exist for a page. Finding the pages would look like this:
\nSelect Page.ID\nFrom Page\nwhere EXISTS\n (Select * \n From Attributes\n Where Page_Id = Page.ID\n and ( (Name = 'Season' and Value = 'Autumn')\n or (Name = 'Flavour' and Value = 'Savory')\n ... etc. ...\n )\n
\nIf you want to find the Links, then you can join this to PAGE_LINK (and even to LINK, if you like).
\nSelect Page.ID\nFrom Page\n Join Page_Link PL on PL.Page_ID = Page.ID\n Join Link on Link.ID = PL.Link_ID\nwhere EXISTS\n (Select * \n From Attributes\n Where Page_Id = Page.ID\n and ( (Name = 'Season' and Value = 'Autumn')\n or (Name = 'Flavour' and Value = 'Savory')\n ... etc. ...\n )\n
\n
soup wrap:
Your attributes are attached to pages. So, you can search for pages that have certain attributes, by checking if those Attributes exist for a page. Finding the pages would look like this:
Select Page.ID
From Page
where EXISTS
(Select *
From Attributes
Where Page_Id = Page.ID
and ( (Name = 'Season' and Value = 'Autumn')
or (Name = 'Flavour' and Value = 'Savory')
... etc. ...
)
If you want to find the Links, then you can join this to PAGE_LINK (and even to LINK, if you like).
Select Page.ID
From Page
Join Page_Link PL on PL.Page_ID = Page.ID
Join Link on Link.ID = PL.Link_ID
where EXISTS
(Select *
From Attributes
Where Page_Id = Page.ID
and ( (Name = 'Season' and Value = 'Autumn')
or (Name = 'Flavour' and Value = 'Savory')
... etc. ...
)
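The EXISTS pattern is portable; a small sketch with sqlite3, where the schema is a guess based on the answer's column names:

```python
import sqlite3

# Sketch of the EXISTS pattern: find pages carrying a given
# attribute name/value pair.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE page(id INTEGER PRIMARY KEY);
    CREATE TABLE attributes(page_id INT, name TEXT, value TEXT);
    INSERT INTO page VALUES (1),(2);
    INSERT INTO attributes VALUES (1,'Season','Autumn'),(2,'Season','Spring');
""")
rows = conn.execute("""
    SELECT p.id
    FROM page p
    WHERE EXISTS (SELECT 1 FROM attributes a
                  WHERE a.page_id = p.id
                    AND a.name = 'Season' AND a.value = 'Autumn')
""").fetchall()
```

Only pages with a matching attribute row survive the EXISTS test, regardless of how many attributes each page has.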
qid & accept id:
(15706765, 15706859)
query:
How can I make three columns my primary key
soup:
ALTER TABLE space ADD PRIMARY KEY(Postal, Number, Houseletter);\n
\nIf a primary key already exists then you want to do this:
\nALTER TABLE space DROP PRIMARY KEY, ADD PRIMARY KEY(Postal, Number, Houseletter);\n
\nif you got duplicate PKs, you can try this:
\nALTER IGNORE TABLE space ADD UNIQUE INDEX idx_name (Postal, Number, Houseletter );\n
\nThis will drop all the duplicate rows. As an added benefit, future INSERTs that are duplicates will error out. As always, you may want to take a backup before running something like this
\nSecond question, your query should look like this :
\nSELECT postal, number, houseletter, furniturevalue, livingspace\nFROM space INNER JOIN furniture\nON ( space.postal = furniture.postal\nAND space.number = furniture.number\nAND space.houseletter = furniture.houseletter)\n
\n
soup wrap:
ALTER TABLE space ADD PRIMARY KEY(Postal, Number, Houseletter);
If a primary key already exists then you want to do this:
ALTER TABLE space DROP PRIMARY KEY, ADD PRIMARY KEY(Postal, Number, Houseletter);
If you have duplicate PKs, you can try this:
ALTER IGNORE TABLE space ADD UNIQUE INDEX idx_name (Postal, Number, Houseletter );
This will drop all the duplicate rows. As an added benefit, future INSERTs that are duplicates will error out. As always, you may want to take a backup before running something like this.
For your second question, your query should look like this:
SELECT postal, number, houseletter, furniturevalue, livingspace
FROM space INNER JOIN furniture
ON ( space.postal = furniture.postal
AND space.number = furniture.number
AND space.houseletter = furniture.houseletter)
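The ALTER TABLE syntax above is MySQL-specific; the composite-key behavior itself can be sketched on any engine. Here with sqlite3, which requires the key to be declared at CREATE time, using made-up rows:

```python
import sqlite3

# Sketch: a three-column composite primary key. Any second row with the
# same (postal, number, houseletter) triple is rejected.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE space(
        postal TEXT, number INT, houseletter TEXT, livingspace INT,
        PRIMARY KEY (postal, number, houseletter))
""")
conn.execute("INSERT INTO space VALUES ('1234AB', 1, 'a', 80)")
try:
    # Same key triple, different payload: the composite key rejects it.
    conn.execute("INSERT INTO space VALUES ('1234AB', 1, 'a', 99)")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
```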
qid & accept id:
(15720109, 15720178)
query:
beginner - obtain the top 3 in sql (taking same total score into account)
soup:
Since MySQL do not support Window Function like any RDBMS has, you can still simulate what DENSE_RANK() can do by using user define variables, eg
\nSELECT a.ID, a.TotalScore, b.Rank\nFROM TableName a\n INNER JOIN\n (\n SELECT TotalScore, @rn := @rn + 1 Rank\n FROM\n (\n SELECT DISTINCT TotalScore\n FROM TableName\n ) a, (SELECT @rn := 0) b\n ORDER BY TotalScore DESC\n ) b ON a.TotalScore = b.TotalScore\nWHERE Rank <= 3\n
\n\nOUTPUT
\n╔════╦════════════╦══════╗\n║ ID ║ TOTALSCORE ║ RANK ║\n╠════╬════════════╬══════╣\n║ 7 ║ 20 ║ 1 ║\n║ 4 ║ 20 ║ 1 ║\n║ 6 ║ 18 ║ 2 ║\n║ 9 ║ 18 ║ 2 ║\n║ 1 ║ 16 ║ 3 ║\n╚════╩════════════╩══════╝\n
\n
soup wrap:
Since MySQL does not support window functions the way most other RDBMSs do, you can still simulate what DENSE_RANK() does by using user-defined variables, e.g.
SELECT a.ID, a.TotalScore, b.Rank
FROM TableName a
INNER JOIN
(
SELECT TotalScore, @rn := @rn + 1 Rank
FROM
(
SELECT DISTINCT TotalScore
FROM TableName
) a, (SELECT @rn := 0) b
ORDER BY TotalScore DESC
) b ON a.TotalScore = b.TotalScore
WHERE Rank <= 3
OUTPUT
╔════╦════════════╦══════╗
║ ID ║ TOTALSCORE ║ RANK ║
╠════╬════════════╬══════╣
║ 7 ║ 20 ║ 1 ║
║ 4 ║ 20 ║ 1 ║
║ 6 ║ 18 ║ 2 ║
║ 9 ║ 18 ║ 2 ║
║ 1 ║ 16 ║ 3 ║
╚════╩════════════╩══════╝
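The same dense-rank simulation can also be expressed without session variables at all, as a correlated subquery that counts how many distinct higher scores exist. A sketch with sqlite3 and made-up data:

```python
import sqlite3

# Sketch: dense rank without window functions or user variables.
# rank = 1 + number of distinct scores strictly greater than this row's score.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE scores(id INT, total INT);
    INSERT INTO scores VALUES (7,20),(4,20),(6,18),(9,18),(1,16),(2,10);
""")
rows = conn.execute("""
    SELECT id, total, rnk FROM (
        SELECT s.id, s.total,
               (SELECT COUNT(DISTINCT t.total) FROM scores t
                 WHERE t.total > s.total) + 1 AS rnk
        FROM scores s) ranked
    WHERE rnk <= 3
    ORDER BY total DESC, id
""").fetchall()
```

Ties share a rank and the next distinct score gets the next rank, exactly as DENSE_RANK() would assign; the correlated subquery is O(n²), so the variable trick in the answer scales better on large tables.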
qid & accept id:
(15736503, 15737262)
query:
Oracle using REGEXP to validate a date field
soup:
Try PL/SQL instead of a regular expression. It will be significantly slower, but will be safer and easier to maintain and extend.\nYou should rely on the Oracle format models to do this correctly. I've seen lots of attempts to validate this information using a regular expression, but\nI rarely see it done correctly.
\nIf you really care about performance, the real answer is to fix your data model.
\nCode and Test Cases:
\n--Function to convert a string to a date, or return null if the format is wrong.\ncreate or replace function validate_date(p_string in string) return date is\nbegin\n return to_date(p_string, 'MONTH DD, YYYY');\nexception when others then\n begin\n return to_date(p_string, 'MM/DD/YYYY');\n exception when others then\n begin\n return to_date(p_string, 'DD-MON-RR');\n exception when others then\n return null;\n end;\n end;\nend;\n/\n\n--Test individual values\nselect validate_date('JULY 31, 2009') from dual;\n2009-07-31\nselect validate_date('7/31/2009') from dual;\n2009-07-31\nselect validate_date('31-JUL-09') from dual;\n2009-07-31\nselect validate_date('2009-07-31') from dual;\n\n
\nSimple Performance Test:
\n--Create table to hold test data\ncreate table test1(a_date varchar2(1000)) nologging;\n\n--Insert 10 million rows\nbegin\n for i in 1 .. 100 loop\n insert /*+ append */ into test1\n select to_char(sysdate+level, 'MM/DD/YYYY') from dual connect by level <= 100000;\n\n commit;\n end loop;\nend;\n/\n\n--"Warm up" the database, run this a few times, see how long a count takes.\n--Best case time to count: 2.3 seconds\nselect count(*) from test1;\n\n\n--How long does it take to convert all those strings?\n--6 minutes... ouch\nselect count(*)\nfrom test1\nwhere validate_date(a_date) is not null;\n
\n
soup wrap:
Try PL/SQL instead of a regular expression. It will be significantly slower, but will be safer and easier to maintain and extend.
You should rely on the Oracle format models to do this correctly. I've seen lots of attempts to validate this information using a regular expression, but I rarely see it done correctly.
If you really care about performance, the real answer is to fix your data model.
Code and Test Cases:
--Function to convert a string to a date, or return null if the format is wrong.
create or replace function validate_date(p_string in string) return date is
begin
return to_date(p_string, 'MONTH DD, YYYY');
exception when others then
begin
return to_date(p_string, 'MM/DD/YYYY');
exception when others then
begin
return to_date(p_string, 'DD-MON-RR');
exception when others then
return null;
end;
end;
end;
/
--Test individual values
select validate_date('JULY 31, 2009') from dual;
2009-07-31
select validate_date('7/31/2009') from dual;
2009-07-31
select validate_date('31-JUL-09') from dual;
2009-07-31
select validate_date('2009-07-31') from dual;
Simple Performance Test:
--Create table to hold test data
create table test1(a_date varchar2(1000)) nologging;
--Insert 10 million rows
begin
for i in 1 .. 100 loop
insert /*+ append */ into test1
select to_char(sysdate+level, 'MM/DD/YYYY') from dual connect by level <= 100000;
commit;
end loop;
end;
/
--"Warm up" the database, run this a few times, see how long a count takes.
--Best case time to count: 2.3 seconds
select count(*) from test1;
--How long does it take to convert all those strings?
--6 minutes... ouch
select count(*)
from test1
where validate_date(a_date) is not null;
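The same "try each format, fall back to the next" idea translates directly to other languages; here is a sketch in Python, where the three format strings are assumptions mirroring the PL/SQL function's format models:

```python
from datetime import datetime, date

# Sketch: try each known date format in turn, return None when nothing parses.
# The formats mirror the PL/SQL function's 'MONTH DD, YYYY', 'MM/DD/YYYY',
# and 'DD-MON-RR' models.
FORMATS = ("%B %d, %Y", "%m/%d/%Y", "%d-%b-%y")

def validate_date(s):
    """Return a date if s matches any known format, else None."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(s.strip(), fmt).date()
        except ValueError:
            continue
    return None
```

Like the PL/SQL version, this delegates all the fiddly calendar rules (leap years, days per month) to the runtime's parser instead of trying to encode them in a regular expression.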
qid & accept id:
(15742348, 15742443)
query:
devide operation in sql
soup:
Following should be your query -
\nSelect * from employee where projectname = (select projectname from employee where LastName = 'Jones');\n
\nWe have not used in clause as Jones is working in one project.
\nIf he is working in multiple projects
\nthen query can be -
\nSelect * from employee where projectname in (select projectname from employee where LastName = 'Jones');\n
\nThanks
\n
soup wrap:
The following should be your query:
Select * from employee where projectname = (select projectname from employee where LastName = 'Jones');
We have not used the IN clause because Jones is working on only one project.
If he is working on multiple projects, then the query can be:
Select * from employee where projectname in (select projectname from employee where LastName = 'Jones');
Thanks
qid & accept id:
(15743183, 15743244)
query:
How to fetch Distinct Title from the GROUP_CONCAT as Left Join without repeating other tables' data?
soup:
You need to use GROUP BY clause because GROUP_CONCAT() is an aggregate function.
\nSELECT Title, GROUP_CONCAT(FEAT) FeatList\nFROM Prop_Feat\nGROUP BY Title\n
\n\n- SQLFiddle Demo
\n
\nOUTPUT
\n╔════════════╦═══════════════════╗\n║ TITLE ║ FEATLIST ║\n╠════════════╬═══════════════════╣\n║ Appliances ║ Gas Range,Fridge ║\n║ Interior ║ Hardwood Flooring ║\n╚════════════╩═══════════════════╝\n
\n
soup wrap:
You need to use a GROUP BY clause because GROUP_CONCAT() is an aggregate function.
SELECT Title, GROUP_CONCAT(FEAT) FeatList
FROM Prop_Feat
GROUP BY Title
OUTPUT
╔════════════╦═══════════════════╗
║ TITLE ║ FEATLIST ║
╠════════════╬═══════════════════╣
║ Appliances ║ Gas Range,Fridge ║
║ Interior ║ Hardwood Flooring ║
╚════════════╩═══════════════════╝
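SQLite's group_concat() behaves like MySQL's GROUP_CONCAT() here, so the aggregation can be sketched with sqlite3 (rows below are made up to match the answer's output):

```python
import sqlite3

# Sketch: one output row per title, with all features collapsed to a CSV list.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE prop_feat(title TEXT, feat TEXT);
    INSERT INTO prop_feat VALUES ('Appliances','Gas Range'),
                                 ('Appliances','Fridge'),
                                 ('Interior','Hardwood Flooring');
""")
rows = conn.execute("""
    SELECT title, GROUP_CONCAT(feat) AS feat_list
    FROM prop_feat
    GROUP BY title
    ORDER BY title
""").fetchall()
```

Note that without an explicit ordering clause the concatenation order within each group is not guaranteed, in either engine.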
qid & accept id:
(15758509, 15758945)
query:
Count references to own ID in MySQL with Grouping
soup:
Assuming for response the parentId is the postId for the response then you can achieve this by the following way
\nQuery 1:
\nSELECT\n a.user,\n SUM(IF(a.parent_id = 0, 1, 0)) as 'NewPosts',\n SUM(IF(a.parent_id > 0, 1,0)) as 'Responses',\n COUNT(a.parent_id) as 'TotalPosts',\n SUM(IF(a.user = b.user, 1, 0)) as 'SelfResponses'\nFROM \n Table1 a\nLEFT JOIN\n Table1 b\nON \n a.parent_id = b.id\nGROUP BY \n a.user\n
\nResults:
\n| USER | NEWPOSTS | RESPONSES | TOTALPOSTS | SELFRESPONSES |\n--------------------------------------------------------------\n| Henry | 1 | 2 | 3 | 1 |\n| Joseph | 1 | 0 | 1 | 0 |\n
\nSQL FIDDLE
\nHope this helps
\n
soup wrap:
Assuming that for a response the parent_id is the id of the post being responded to, you can achieve this the following way.
Query 1:
SELECT
a.user,
SUM(IF(a.parent_id = 0, 1, 0)) as 'NewPosts',
SUM(IF(a.parent_id > 0, 1,0)) as 'Responses',
COUNT(a.parent_id) as 'TotalPosts',
SUM(IF(a.user = b.user, 1, 0)) as 'SelfResponses'
FROM
Table1 a
LEFT JOIN
Table1 b
ON
a.parent_id = b.id
GROUP BY
a.user
Results:
| USER | NEWPOSTS | RESPONSES | TOTALPOSTS | SELFRESPONSES |
--------------------------------------------------------------
| Henry | 1 | 2 | 3 | 1 |
| Joseph | 1 | 0 | 1 | 0 |
SQL FIDDLE
Hope this helps
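The conditional-aggregation self-join can be checked with sqlite3, swapping MySQL's IF() for the portable CASE expression. The rows below are made up so that Henry responds to his own post once:

```python
import sqlite3

# Sketch: self-join each post to its parent, then count per user with
# conditional aggregation (CASE replaces MySQL's IF()).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE posts(id INT, user TEXT, parent_id INT);
    INSERT INTO posts VALUES (1,'Henry',0),(2,'Joseph',0),
                             (3,'Henry',1),(4,'Henry',2);
""")
rows = conn.execute("""
    SELECT a.user,
           SUM(CASE WHEN a.parent_id = 0 THEN 1 ELSE 0 END) AS new_posts,
           SUM(CASE WHEN a.parent_id > 0 THEN 1 ELSE 0 END) AS responses,
           COUNT(*) AS total_posts,
           SUM(CASE WHEN a.user = b.user THEN 1 ELSE 0 END) AS self_responses
    FROM posts a LEFT JOIN posts b ON a.parent_id = b.id
    GROUP BY a.user
    ORDER BY a.user
""").fetchall()
```

The LEFT JOIN keeps new posts (which have no parent row) in the aggregation; for those rows `a.user = b.user` is NULL, so the CASE falls through to 0.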
qid & accept id:
(15808243, 15809796)
query:
How to Select master table data and select referance table top one data sql query
soup:
In SQLServer2005+ use option with OUTER APPLY operator
\nSELECT *\nFROM master t1 OUTER APPLY (\n SELECT TOP 1 t2.Col1, t2.Col2 ...\n FROM child t2\n WHERE t1.Id = t2.Id\n ORDER BY t2.CreatedDate DESC\n ) o\n
\nOR option with CTE and ROW_NUMBER() ranking function
\n;WITH cte AS\n ( \n SELECT *, \n ROW_NUMBER() OVER(PARTITION BY t1.Id ORDER BY t2.CreatedDate DESC) AS rn\n FROM master t1 JOIN child t2 ON t1.Id = t2.Id\n )\n SELECT *\n FROM cte\n WHERE rn = 1\n
\n
soup wrap:
In SQL Server 2005+, use the option with the OUTER APPLY operator:
SELECT *
FROM master t1 OUTER APPLY (
SELECT TOP 1 t2.Col1, t2.Col2 ...
FROM child t2
WHERE t1.Id = t2.Id
ORDER BY t2.CreatedDate DESC
) o
Or the option with a CTE and the ROW_NUMBER() ranking function:
;WITH cte AS
(
SELECT *,
ROW_NUMBER() OVER(PARTITION BY t1.Id ORDER BY t2.CreatedDate DESC) AS rn
FROM master t1 JOIN child t2 ON t1.Id = t2.Id
)
SELECT *
FROM cte
WHERE rn = 1
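The CTE + ROW_NUMBER() option works on any engine with window functions; a sketch with sqlite3 (SQLite 3.25+) and made-up master/child rows, keeping only the newest child row per id:

```python
import sqlite3  # window functions require SQLite 3.25+

# Sketch: number each master's child rows newest-first, keep row number 1.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE master(id INT, name TEXT);
    CREATE TABLE child(id INT, created TEXT, note TEXT);
    INSERT INTO master VALUES (1,'a'),(2,'b');
    INSERT INTO child VALUES (1,'2013-01-01','old'),(1,'2013-02-01','new'),
                             (2,'2013-03-01','only');
""")
rows = conn.execute("""
    WITH cte AS (
        SELECT m.id, m.name, c.note,
               ROW_NUMBER() OVER (PARTITION BY m.id
                                  ORDER BY c.created DESC) AS rn
        FROM master m JOIN child c ON m.id = c.id)
    SELECT id, name, note FROM cte WHERE rn = 1 ORDER BY id
""").fetchall()
```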
qid & accept id:
(15834569, 15834758)
query:
How to bulk insert only new rows in PostreSQL
soup:
Import data
\nCOPY everything to a temporary staging table and insert only new titles into your target table.
\nCREATE TEMP TABLE tmp(title text);\n\nCOPY tmp FROM 'path/to/file.csv';\nANALYZE tmp;\n\nINSERT INTO tbl\nSELECT DISTINCT tmp.title\nFROM tmp \nLEFT JOIN tbl USING (title)\nWHERE tbl.title IS NULL;\n
\nIDs should be generated automatically with a serial column tbl_id in tbl.
\nThe LEFT JOIN / IS NULL construct disqualifies already existing titles. NOT EXISTS would be another possibility.
\nDISTINCT prevents duplicates in the incoming data in the temporary table tmp.
\nANALYZE is useful to make sure the query planner picks a sensible plan, and temporary tables are not analyzed by autovacuum.
\nSince you have 3 million items, it might pay to raise the setting for temp_buffer (for this session only):
\nSET temp_buffers = 1000MB;\n
\nOr however much you can afford and is enough to hold the temp table in RAM, which is much faster. Note: must be done first in the session - before any temp objects are created.
\nRetrieve IDs
\nTo see all IDs for the imported data:
\nSELECT tbl.tbl_id, tbl.title\nFROM tbl\nJOIN tmp USING (title)\n
\nIn the same session! A temporary table is dropped automatically at the end of the session.
\n
soup wrap:
Import data
COPY everything to a temporary staging table and insert only new titles into your target table.
CREATE TEMP TABLE tmp(title text);
COPY tmp FROM 'path/to/file.csv';
ANALYZE tmp;
INSERT INTO tbl
SELECT DISTINCT tmp.title
FROM tmp
LEFT JOIN tbl USING (title)
WHERE tbl.title IS NULL;
IDs should be generated automatically with a serial column tbl_id in tbl.
The LEFT JOIN / IS NULL construct disqualifies already existing titles. NOT EXISTS would be another possibility.
DISTINCT prevents duplicates in the incoming data in the temporary table tmp.
ANALYZE is useful to make sure the query planner picks a sensible plan, and temporary tables are not analyzed by autovacuum.
Since you have 3 million items, it might pay to raise the setting for temp_buffers (for this session only):
SET temp_buffers = 1000MB;
Or however much you can afford and is enough to hold the temp table in RAM, which is much faster. Note: must be done first in the session - before any temp objects are created.
Retrieve IDs
To see all IDs for the imported data:
SELECT tbl.tbl_id, tbl.title
FROM tbl
JOIN tmp USING (title)
In the same session! A temporary table is dropped automatically at the end of the session.
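The core anti-join insert is portable beyond PostgreSQL; a sketch of the staging-table pattern with sqlite3 and made-up titles:

```python
import sqlite3

# Sketch: stage incoming rows in a temp table, anti-join against the target,
# and insert only the titles the target does not yet have.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tbl(tbl_id INTEGER PRIMARY KEY, title TEXT);
    CREATE TEMP TABLE tmp(title TEXT);
    INSERT INTO tbl(title) VALUES ('existing');
    INSERT INTO tmp VALUES ('existing'),('new one'),('new one');
""")
conn.execute("""
    INSERT INTO tbl(title)
    SELECT DISTINCT tmp.title
    FROM tmp LEFT JOIN tbl ON tbl.title = tmp.title
    WHERE tbl.title IS NULL
""")
titles = [r[0] for r in conn.execute("SELECT title FROM tbl ORDER BY tbl_id")]
```

'existing' is filtered out by the anti-join and the in-file duplicate of 'new one' is collapsed by DISTINCT, so exactly one new row lands in the target.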
qid & accept id:
(15836482, 15837317)
query:
Query to replace null values from the table
soup:
This query should work even if there are several records in a row with NULL
\nQuery:
\n\nUPDATE Table1\nSET car_name = (SELECT t1.car_name\n FROM (SELECT * FROM Table1) t1\n WHERE t1.id < Table1.id\n AND t1.car_name is not null\n ORDER BY t1.id DESC\n LIMIT 1)\nWHERE car_name is null\n
\nResult:
\n| ID | CAR_NAME | MODEL | YEAR |\n--------------------------------\n| 1 | a | abc | 2000 |\n| 2 | b | xyx | 2001 |\n| 3 | b | asd | 2003 |\n| 4 | c | qwe | 2004 |\n| 5 | c | xds | 2005 |\n| 6 | d | asd | 2006 |\n
\n
soup wrap:
This query should work even if there are several consecutive records with NULL.
Query:
UPDATE Table1
SET car_name = (SELECT t1.car_name
FROM (SELECT * FROM Table1) t1
WHERE t1.id < Table1.id
AND t1.car_name is not null
ORDER BY t1.id DESC
LIMIT 1)
WHERE car_name is null
Result:
| ID | CAR_NAME | MODEL | YEAR |
--------------------------------
| 1 | a | abc | 2000 |
| 2 | b | xyx | 2001 |
| 3 | b | asd | 2003 |
| 4 | c | qwe | 2004 |
| 5 | c | xds | 2005 |
| 6 | d | asd | 2006 |
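The gap-filling UPDATE runs as-is on SQLite too (which, unlike MySQL, has no restriction on re-reading the table being updated, so the derived-table wrapper is unnecessary). A sketch with made-up rows:

```python
import sqlite3

# Sketch: each NULL car_name takes the nearest earlier non-NULL value.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE cars(id INT, car_name TEXT);
    INSERT INTO cars VALUES (1,'a'),(2,'b'),(3,NULL),(4,'c'),(5,NULL);
""")
conn.execute("""
    UPDATE cars
    SET car_name = (SELECT t.car_name FROM cars t
                    WHERE t.id < cars.id AND t.car_name IS NOT NULL
                    ORDER BY t.id DESC LIMIT 1)
    WHERE car_name IS NULL
""")
rows = conn.execute("SELECT id, car_name FROM cars ORDER BY id").fetchall()
```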
qid & accept id:
(15872394, 15872582)
query:
Using multiple joins (e.g left join)
soup:
Let be table B:
\nid\n----\n1\n2\n3\n
\nLet be table C
\nid name\n------------\n1 John\n2 Mary\n2 Anne\n3 Stef\n
\nAny id from b is matched with ids from c, then id=2 will be matched twice. So a left join on id will return 4 rows even if base table B has 3 rows.
\nNow look at a more evil example:
\nTable B
\nid\n----\n1\n2\n2\n3\n4\n
\ntable C
\nid name\n------------\n1 John\n2 Mary\n2 Anne\n3 Stef\n
\nEvery id from b is matched with ids from c, then first id=2 will be matched twice and second id=2 will be matched twice so the result of
\nselect b.id, c.name\nfrom b left join c on (b.id = c.id)\n
\nwill be
\nid name\n------------\n1 John\n2 Mary\n2 Mary\n2 Anne\n2 Anne\n3 Stef\n4 (null)\n
\nThe id=4 is not matched but appears in the result because is a left join.
\n
soup wrap:
Let be table B:
id
----
1
2
3
Let be table C
id name
------------
1 John
2 Mary
2 Anne
3 Stef
Every id from b is matched against the ids from c, and id=2 is matched twice. So a left join on id will return 4 rows even though base table B has only 3 rows.
Now look at a more evil example:
Table B
id
----
1
2
2
3
4
table C
id name
------------
1 John
2 Mary
2 Anne
3 Stef
Every id from b is matched against the ids from c; the first id=2 row is matched twice and the second id=2 row is matched twice as well, so the result of
select b.id, c.name
from b left join c on (b.id = c.id)
will be
id name
------------
1 John
2 Mary
2 Mary
2 Anne
2 Anne
3 Stef
4 (null)
The id=4 row is not matched but still appears in the result because it is a left join.
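The row multiplication described above can be reproduced directly with sqlite3, using the second example's tables:

```python
import sqlite3

# Sketch: duplicate ids on both sides of the join multiply (2 x 2 rows for
# id=2), and the unmatched id=4 survives with NULL because of the LEFT JOIN.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE b(id INT);
    CREATE TABLE c(id INT, name TEXT);
    INSERT INTO b VALUES (1),(2),(2),(3),(4);
    INSERT INTO c VALUES (1,'John'),(2,'Mary'),(2,'Anne'),(3,'Stef');
""")
rows = conn.execute("""
    SELECT b.id, c.name
    FROM b LEFT JOIN c ON b.id = c.id
    ORDER BY b.id, c.name
""").fetchall()
```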
qid & accept id:
(15948208, 15948748)
query:
Group dates by their day of week
soup:
I think to get exactly what you want in one query is not easily possible. But I came to something that is nearly your desired result:
\nSELECT TIME(air), title, GROUP_CONCAT(DAYOFWEEK(air)) \nFROM programs WHERE title = 'Factor' \nGROUP BY TIME(air)\n
\nThis gives me the following result:
\nTIME(air) title GROUP_CONCAT(DAYOFWEEK(air))\n-------------------------------------------------\n14:00:00 Factor 3\n17:00:00 Factor 2,3,4\n
\nWith this result you can easily utilize php to get your desired result. Results like "monday, wednesday, friday-saturday" are possible with this too.
\n
soup wrap:
I don't think it is easily possible to get exactly what you want in one query. But I came up with something close to your desired result:
SELECT TIME(air), title, GROUP_CONCAT(DAYOFWEEK(air))
FROM programs WHERE title = 'Factor'
GROUP BY TIME(air)
This gives me the following result:
TIME(air) title GROUP_CONCAT(DAYOFWEEK(air))
-------------------------------------------------
14:00:00 Factor 3
17:00:00 Factor 2,3,4
With this result you can easily use PHP to get your desired output. Results like "monday, wednesday, friday-saturday" are possible with this too.
qid & accept id:
(15964439, 15965140)
query:
Efficient way to insert multiple rows and assigning each one's Id to another table's column
soup:
You can use the OUTPUT clause to capture identities from multiple inserted rows. In the following, I'm assuming that ServiceName and RequestName are sufficient to uniquely identify values being passed in. If they're not, then hopefully you can adapt the below (you didn't really define in the question any usable non-identity column names or values):
\nFirst, set up the tables:
\ncreate table Requests (RId int IDENTITY(1,1) not null primary key,RequestName varchar(10) not null)\ncreate table Services (SId int IDENTITY(1,1) not null primary key,ServiceName varchar(10) not null)\ncreate table Mappings (MId int IDENTITY(1,1) not null,RId int not null references Requests,SId int not null references Services)\n
\nNow declare what would be the TVP passed into the stored procedure (note that this script and the next need to be run together in this simulation):
\ndeclare @NewValues table (\n RequestName varchar(10) not null,\n ServiceName varchar(10) not null\n)\ninsert into @NewValues (RequestName,ServiceName) values\n('R1','S1'),\n('R1','S2'),\n('R1','S3'),\n('R2','S4'),\n('R2','S5'),\n('R3','S6')\n
\nAnd then, inside the SP, you'd have code like the following:
\ndeclare @TmpRIDs table (RequestName varchar(10) not null,RId int not null)\ndeclare @TmpSIDs table (ServiceName varchar(10) not null,SId int not null)\n\n;merge into Requests r using (select distinct RequestName from @NewValues) n on 1=0\nwhen not matched then insert (RequestName) values (n.RequestName)\noutput n.RequestName,inserted.RId into @TmpRIDs;\n\n;merge into Services s using (select distinct ServiceName from @NewValues) n on 1=0\nwhen not matched then insert (ServiceName) values (n.ServiceName)\noutput n.ServiceName,inserted.SId into @TmpSIDs;\n\ninsert into Mappings (RId,SId)\nselect RId,SId\nfrom @NewValues nv\n inner join\n @TmpRIds r\n on\n nv.RequestName = r.RequestName \n inner join\n @TmpSIDs s\n on\n nv.ServiceName = s.ServiceName;\n
\nAnd to check the result:
\nselect * from Mappings\n
\nproduces:
\nMId RId SId\n----------- ----------- -----------\n1 1 1\n2 1 2\n3 1 3\n4 2 4\n5 2 5\n6 3 6\n
\nWhich is similar to what you have in your question.
\nThe tricky part of the code is (mis-)using the MERGE statement, in order to be able to capture columns from both the inserted table (which contains the newly generated IDENTITY values) and the table that's acting as the source of rows. The OUTPUT clause for the INSERT statement only allows reference to the inserted pseudo-table, so it can't be used here.
\n
soup wrap:
You can use the OUTPUT clause to capture identities from multiple inserted rows. In the following, I'm assuming that ServiceName and RequestName are sufficient to uniquely identify values being passed in. If they're not, then hopefully you can adapt the below (you didn't really define in the question any usable non-identity column names or values):
First, set up the tables:
create table Requests (RId int IDENTITY(1,1) not null primary key,RequestName varchar(10) not null)
create table Services (SId int IDENTITY(1,1) not null primary key,ServiceName varchar(10) not null)
create table Mappings (MId int IDENTITY(1,1) not null,RId int not null references Requests,SId int not null references Services)
Now declare what would be the TVP passed into the stored procedure (note that this script and the next need to be run together in this simulation):
declare @NewValues table (
RequestName varchar(10) not null,
ServiceName varchar(10) not null
)
insert into @NewValues (RequestName,ServiceName) values
('R1','S1'),
('R1','S2'),
('R1','S3'),
('R2','S4'),
('R2','S5'),
('R3','S6')
And then, inside the SP, you'd have code like the following:
declare @TmpRIDs table (RequestName varchar(10) not null,RId int not null)
declare @TmpSIDs table (ServiceName varchar(10) not null,SId int not null)
;merge into Requests r using (select distinct RequestName from @NewValues) n on 1=0
when not matched then insert (RequestName) values (n.RequestName)
output n.RequestName,inserted.RId into @TmpRIDs;
;merge into Services s using (select distinct ServiceName from @NewValues) n on 1=0
when not matched then insert (ServiceName) values (n.ServiceName)
output n.ServiceName,inserted.SId into @TmpSIDs;
insert into Mappings (RId,SId)
select RId,SId
from @NewValues nv
inner join
@TmpRIds r
on
nv.RequestName = r.RequestName
inner join
@TmpSIDs s
on
nv.ServiceName = s.ServiceName;
And to check the result:
select * from Mappings
produces:
MId RId SId
----------- ----------- -----------
1 1 1
2 1 2
3 1 3
4 2 4
5 2 5
6 3 6
Which is similar to what you have in your question.
The tricky part of the code is (mis-)using the MERGE statement, in order to be able to capture columns from both the inserted table (which contains the newly generated IDENTITY values) and the table that's acting as the source of rows. The OUTPUT clause for the INSERT statement only allows reference to the inserted pseudo-table, so it can't be used here.
qid & accept id:
(16036991, 16037053)
query:
Reference something in the select clause SQL
soup:
No, you cannot used the alias that was generated on the same level on the SELECT statement.
\nHere are the possible ways to accomplish.
\nUsing the original formula:
\nselect sum([some calculation]) as x,\n sum([some other calculation]) as y,\n sum([some calculation]) / sum([some other calculation]) as z\nfrom tableName\n
\nor by using subquery:
\nSELECT x,\n y,\n x/y z\nFROM \n(\n select sum([some calculation]) as x,\n sum([some other calculation]) as y\n from tableName\n) s\n
\n
soup wrap:
No, you cannot use an alias that was generated at the same level of the SELECT statement.
Here are the possible ways to accomplish it.
Using the original formula:
select sum([some calculation]) as x,
sum([some other calculation]) as y,
sum([some calculation]) / sum([some other calculation]) as z
from tableName
or by using subquery:
SELECT x,
y,
x/y z
FROM
(
select sum([some calculation]) as x,
sum([some other calculation]) as y
from tableName
) s
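The derived-table approach can be sketched with sqlite3, using a made-up two-column table in place of the unspecified calculations:

```python
import sqlite3

# Sketch: compute x and y once in a derived table, then reuse the aliases
# one level up, where they are legal to reference.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE nums(a REAL, b REAL);
    INSERT INTO nums VALUES (1,2),(3,2);
""")
row = conn.execute("""
    SELECT x, y, x / y AS z
    FROM (SELECT SUM(a) AS x, SUM(b) AS y FROM nums) s
""").fetchone()
```

The outer query sees x and y as ordinary columns of the derived table s, so z can be built from them without repeating the aggregate expressions.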
qid & accept id:
(16053215, 16053891)
query:
find second (or nth) latest value in oracle
soup:
If I understand you right, then try something like this:
\nselect * \nfrom(\n select sent_by, row_number() over (order by sent_by desc, id asc) row_num\n from MY_TEST) t\nwhere row_num = 2 -- or 3 ... n\n
\n
\nUPDATE
\nTry this:
\nselect * \nfrom(\n select sent_by, \n rank() over (order by max(id) desc) rk\n from MY_TEST\n group by sent_by) t\nwhere rk = 2 -- or 3 .. n\n
\n\n
soup wrap:
If I understand you right, then try something like this:
select *
from(
select sent_by, row_number() over (order by sent_by desc, id asc) row_num
from MY_TEST) t
where row_num = 2 -- or 3 ... n
UPDATE
Try this:
select *
from(
select sent_by,
rank() over (order by max(id) desc) rk
from MY_TEST
group by sent_by) t
where rk = 2 -- or 3 .. n
qid & accept id:
(16053425, 16053663)
query:
Select column names that match a criteria (MySQL)
soup:
If I understand your question correctly, maybe you need something like this:
\nSELECT 'col_a' col\nFROM yourtable\nWHERE col_a\nUNION\nSELECT 'col_b'\nFROM yourtable\nWHERE col_b\nUNION\nSELECT 'col_c'\nFROM yourtable\nWHERE col_c\n...\n
\nthis will return all columns in your table that have at least one row where they are true.
\nOr maybe this:
\nSELECT\n id,\n CONCAT_WS(', ',\n CASE WHEN col_a THEN 'col_a' END,\n CASE WHEN col_b THEN 'col_b' END,\n CASE WHEN col_c THEN 'col_c' END) cols\nFROM\n yourtable\n
\nthat will return rows in this format:
\n| ID | COLS |\n----------------------------\n| 1 | col_a, col_c |\n| 2 | col_a, col_b, col_c |\n| 3 | |\n| 4 | col_c |\n...\n
\nPlease see fiddle here. And if you need to do it dynamically, you could use this prepared statement:
\nSELECT\n CONCAT(\n 'SELECT id, CONCAT_WS(\', \',',\n GROUP_CONCAT(\n CONCAT('CASE WHEN ',\n `COLUMN_NAME`,\n ' THEN \'',\n `COLUMN_NAME`,\n '\' END')),\n ') cols FROM yourtable'\n )\nFROM\n `INFORMATION_SCHEMA`.`COLUMNS` \nWHERE\n `TABLE_NAME`='yourtable'\n AND COLUMN_NAME!='id'\nINTO @sql;\n\nPREPARE stmt FROM @sql;\nEXECUTE stmt;\n
\nFiddle here.
\n
soup wrap:
If I understand your question correctly, maybe you need something like this:
SELECT 'col_a' col
FROM yourtable
WHERE col_a
UNION
SELECT 'col_b'
FROM yourtable
WHERE col_b
UNION
SELECT 'col_c'
FROM yourtable
WHERE col_c
...
this will return all columns in your table that have at least one row where they are true.
Or maybe this:
SELECT
id,
CONCAT_WS(', ',
CASE WHEN col_a THEN 'col_a' END,
CASE WHEN col_b THEN 'col_b' END,
CASE WHEN col_c THEN 'col_c' END) cols
FROM
yourtable
that will return rows in this format:
| ID | COLS |
----------------------------
| 1 | col_a, col_c |
| 2 | col_a, col_b, col_c |
| 3 | |
| 4 | col_c |
...
Please see fiddle here. And if you need to do it dynamically, you could use this prepared statement:
SELECT
CONCAT(
'SELECT id, CONCAT_WS(\', \',',
GROUP_CONCAT(
CONCAT('CASE WHEN ',
`COLUMN_NAME`,
' THEN \'',
`COLUMN_NAME`,
'\' END')),
') cols FROM yourtable'
)
FROM
`INFORMATION_SCHEMA`.`COLUMNS`
WHERE
`TABLE_NAME`='yourtable'
AND COLUMN_NAME!='id'
INTO @sql;
PREPARE stmt FROM @sql;
EXECUTE stmt;
Fiddle here.
qid & accept id:
(16093468, 16093586)
query:
Over lapping in SQL
soup:
My solution starts by generating all possible pairs of applications that are of interest. This is the driver subquery.
\nIt then joins in the original data for each of the apps.
\nFinally, it uses count(distinct) to count the distinct users that match between the two lists.
\nselect pairs.app1, pairs.app2,\n COUNT(distinct case when tleft.user = tright.user then tleft.user end) as NumCommonUsers\nfrom (select t1.app as app1, t2.app as app2\n from (select distinct app\n from t\n ) t1 cross join\n (select distinct app\n from t\n ) t2\n where t1.app <= t2.app\n ) pairs left outer join\n t tleft\n on tleft.app = pairs.app1 left outer join\n t tright\n on tright.app = pairs.app2\ngroup by pairs.app1, pairs.app2\n
\nYou could move the conditional comparison in the count to the joins and just use count(distinct):
\nselect pairs.app1, pairs.app2,\n COUNT(distinct tleft.user) as NumCommonUsers\nfrom (select t1.app as app1, t2.app as app2\n from (select distinct app\n from t\n ) t1 cross join\n (select distinct app\n from t\n ) t2\n where t1.app <= t2.app\n ) pairs left outer join\n t tleft\n on tleft.app = pairs.app1 left outer join\n t tright\n on tright.app = pairs.app2 and\n tright.user = tleft.user\ngroup by pairs.app1, pairs.app2\n
\nI prefer the first method because it is more explicit on what is being counted.
\nThis is standard SQL, so it should work on Vertica.
\n
soup wrap:
My solution starts by generating all possible pairs of applications that are of interest. This is the driver subquery.
It then joins in the original data for each of the apps.
Finally, it uses count(distinct) to count the distinct users that match between the two lists.
select pairs.app1, pairs.app2,
COUNT(distinct case when tleft.user = tright.user then tleft.user end) as NumCommonUsers
from (select t1.app as app1, t2.app as app2
from (select distinct app
from t
) t1 cross join
(select distinct app
from t
) t2
where t1.app <= t2.app
) pairs left outer join
t tleft
on tleft.app = pairs.app1 left outer join
t tright
on tright.app = pairs.app2
group by pairs.app1, pairs.app2
You could move the conditional comparison in the count to the joins and just use count(distinct):
select pairs.app1, pairs.app2,
COUNT(distinct tleft.user) as NumCommonUsers
from (select t1.app as app1, t2.app as app2
from (select distinct app
from t
) t1 cross join
(select distinct app
from t
) t2
where t1.app <= t2.app
) pairs left outer join
t tleft
on tleft.app = pairs.app1 left outer join
t tright
on tright.app = pairs.app2 and
tright.user = tleft.user
group by pairs.app1, pairs.app2
I prefer the first method because it is more explicit on what is being counted.
This is standard SQL, so it should work on Vertica.
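Indeed, the first query's pattern runs unchanged on SQLite; a sketch with made-up app/user rows:

```python
import sqlite3

# Sketch: build all app pairs from the distinct app list, then count users
# common to both apps of each pair via the conditional COUNT(DISTINCT ...).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t(app TEXT, user TEXT);
    INSERT INTO t VALUES ('A','u1'),('A','u2'),('B','u1'),('B','u3'),('C','u4');
""")
rows = conn.execute("""
    SELECT pairs.app1, pairs.app2,
           COUNT(DISTINCT CASE WHEN tl.user = tr.user THEN tl.user END) AS n
    FROM (SELECT t1.app AS app1, t2.app AS app2
          FROM (SELECT DISTINCT app FROM t) t1
          CROSS JOIN (SELECT DISTINCT app FROM t) t2
          WHERE t1.app <= t2.app) pairs
    LEFT JOIN t tl ON tl.app = pairs.app1
    LEFT JOIN t tr ON tr.app = pairs.app2
    GROUP BY pairs.app1, pairs.app2
    ORDER BY pairs.app1, pairs.app2
""").fetchall()
```

Only u1 uses both A and B, so that pair counts 1; the self-pairs (A,A), (B,B), (C,C) simply count each app's own users.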
qid & accept id:
(16127878, 16128127)
query:
Inserting data from one table(triplestore) to another(property table)
soup:
This is easy if you have a known, fixed set of properties. If you do not have a known set of fixed properties you have to generate dynamic SQL, either from your app, from PL/PgSQL or using the crosstab function from the tablefunc extension.
\nFor fixed property sets you can self-join:
\nhttp://sqlfiddle.com/#!12/391b7/6
\nSELECT p1."Subject", p1."Object" AS "prop1", p2."Object" AS "prop2"\nFROM triplestore p1\nINNER JOIN triplestore p2 ON (p1."Subject" = p2."Subject")\nWHERE p1."Property" = 'prop1'\n AND p2."Property" = 'prop2'\nORDER BY p1."Subject";\n\nSELECT p1."Subject", p1."Object" AS "prop1"\nFROM triplestore p1\nWHERE p1."Property" = 'prop3'\nORDER BY p1."Subject";\n
\nTo turn these into INSERTs simply use INSERT ... SELECT eg:
\nINSERT INTO "Property Table 1"\nSELECT p1."Subject", p1."Object" AS "prop1"\nFROM triplestore p1\nWHERE p1."Property" = 'prop3'\nORDER BY p1."Subject";\n
\n
soup wrap:
This is easy if you have a known, fixed set of properties. If you do not have a known set of fixed properties you have to generate dynamic SQL, either from your app, from PL/PgSQL or using the crosstab function from the tablefunc extension.
For fixed property sets you can self-join:
http://sqlfiddle.com/#!12/391b7/6
SELECT p1."Subject", p1."Object" AS "prop1", p2."Object" AS "prop2"
FROM triplestore p1
INNER JOIN triplestore p2 ON (p1."Subject" = p2."Subject")
WHERE p1."Property" = 'prop1'
AND p2."Property" = 'prop2'
ORDER BY p1."Subject";
SELECT p1."Subject", p1."Object" AS "prop1"
FROM triplestore p1
WHERE p1."Property" = 'prop3'
ORDER BY p1."Subject";
To turn these into INSERTs simply use INSERT ... SELECT eg:
INSERT INTO "Property Table 1"
SELECT p1."Subject", p1."Object" AS "prop1"
FROM triplestore p1
WHERE p1."Property" = 'prop3'
ORDER BY p1."Subject";
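The self-join plus INSERT ... SELECT pattern can be tried end to end in SQLite. A minimal sketch with invented triples; the destination table `prop_table` stands in for "Property Table 1".

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE triplestore ("Subject" TEXT, "Property" TEXT, "Object" TEXT);
INSERT INTO triplestore VALUES
 ('s1','prop1','a'),('s1','prop2','b'),
 ('s2','prop1','c'),('s2','prop2','d');
CREATE TABLE prop_table ("Subject" TEXT, prop1 TEXT, prop2 TEXT);
""")

# Self-join on Subject turns two property rows into one wide row,
# and INSERT ... SELECT materialises the result.
conn.execute("""
INSERT INTO prop_table
SELECT p1."Subject", p1."Object", p2."Object"
FROM triplestore p1
INNER JOIN triplestore p2 ON p1."Subject" = p2."Subject"
WHERE p1."Property" = 'prop1'
  AND p2."Property" = 'prop2'
""")
rows = conn.execute('SELECT * FROM prop_table ORDER BY "Subject"').fetchall()
```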
qid & accept id:
(16136119, 16136452)
query:
MySQL - Combining multiple selects from same table into one result table with a group by
soup:
MySQL does not have a PIVOT function but you can convert the rows of data into columns using an aggregate function with a CASE expression.
\nIf you have a limited number of years, then you can hard-code the query:
\nselect meterNo,\n sum(case when year(readingDate) = 2009 then readingValue else 0 end) `2009`,\n sum(case when year(readingDate) = 2010 then readingValue else 0 end) `2010`,\n sum(case when year(readingDate) = 2011 then readingValue else 0 end) `2011`,\n sum(case when year(readingDate) = 2012 then readingValue else 0 end) `2012`,\n sum(case when year(readingDate) = 2013 then readingValue else 0 end) `2013`\nfrom readings\ngroup by meterno;\n
\n\nBut if you are going to have an unknown number of values or what the query to adjust as new years are added to the database, then you can use a prepared statement to generate dynamic SQL:
\nSET @sql = NULL;\nSELECT\n GROUP_CONCAT(DISTINCT\n CONCAT(\n 'sum(CASE WHEN year(readingDate) = ',\n year(readingDate),\n ' THEN readingValue else 0 END) AS `',\n year(readingDate), '`'\n )\n ) INTO @sql\nFROM readings;\n\nSET @sql \n = CONCAT('SELECT meterno, ', @sql, ' \n from readings\n group by meterno');\n\nPREPARE stmt FROM @sql;\nEXECUTE stmt;\nDEALLOCATE PREPARE stmt;\n
\nSee SQL Fiddle with Demo. Both give the result:
\n| METERNO | 2009 | 2010 | 2012 | 2013 | 2011 |\n----------------------------------------------\n| 1 | 90 | 180 | 0 | 90 | 90 |\n| 2 | 50 | 0 | 90 | 0 | 0 |\n| 3 | 80 | 40 | 90 | 90 | 0 |\n
\nAs a side note, if you want null to display in the rows without values instead of the zeros, then you can remove the else 0 (see Demo)
\n
soup wrap:
MySQL does not have a PIVOT function but you can convert the rows of data into columns using an aggregate function with a CASE expression.
If you have a limited number of years, then you can hard-code the query:
select meterNo,
sum(case when year(readingDate) = 2009 then readingValue else 0 end) `2009`,
sum(case when year(readingDate) = 2010 then readingValue else 0 end) `2010`,
sum(case when year(readingDate) = 2011 then readingValue else 0 end) `2011`,
sum(case when year(readingDate) = 2012 then readingValue else 0 end) `2012`,
sum(case when year(readingDate) = 2013 then readingValue else 0 end) `2013`
from readings
group by meterno;
But if you are going to have an unknown number of values, or want the query to adjust as new years are added to the database, then you can use a prepared statement to generate dynamic SQL:
SET @sql = NULL;
SELECT
GROUP_CONCAT(DISTINCT
CONCAT(
'sum(CASE WHEN year(readingDate) = ',
year(readingDate),
' THEN readingValue else 0 END) AS `',
year(readingDate), '`'
)
) INTO @sql
FROM readings;
SET @sql
= CONCAT('SELECT meterno, ', @sql, '
from readings
group by meterno');
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
See SQL Fiddle with Demo. Both give the result:
| METERNO | 2009 | 2010 | 2012 | 2013 | 2011 |
----------------------------------------------
| 1 | 90 | 180 | 0 | 90 | 90 |
| 2 | 50 | 0 | 90 | 0 | 0 |
| 3 | 80 | 40 | 90 | 90 | 0 |
As a side note, if you want NULL to display in the rows without values instead of zeros, you can remove the ELSE 0 (see Demo).
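The hard-coded pivot variant translates almost directly to SQLite, which is handy for experimenting. A minimal sketch with invented readings; SQLite has no YEAR() function, so `strftime('%Y', ...)` stands in for it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE readings (meterNo INTEGER, readingDate TEXT, readingValue INTEGER);
INSERT INTO readings VALUES
 (1, '2009-01-01', 90), (1, '2010-03-01', 180),
 (2, '2009-06-01', 50), (2, '2012-02-01', 90);
""")

# One SUM(CASE ...) column per year of interest.
rows = conn.execute("""
SELECT meterNo,
  SUM(CASE WHEN strftime('%Y', readingDate) = '2009' THEN readingValue ELSE 0 END) AS y2009,
  SUM(CASE WHEN strftime('%Y', readingDate) = '2010' THEN readingValue ELSE 0 END) AS y2010,
  SUM(CASE WHEN strftime('%Y', readingDate) = '2012' THEN readingValue ELSE 0 END) AS y2012
FROM readings
GROUP BY meterNo
ORDER BY meterNo
""").fetchall()
```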
qid & accept id:
(16143769, 16147021)
query:
Referencing the value of the previous calculcated value in Oracle
soup:
A variation on Ben's answer to use a windowing clause, which seems to take care of your updated requirements:
\nselect eventno, eventtype, totalcharge, remainingqty, outqty,\n initial_charge - case when running_outqty = 0 then 0\n else (running_outqty / 100) * initial_charge end as remainingcharge\nfrom (\n select eventno, eventtype, totalcharge, remainingqty, outqty,\n first_value(totalcharge) over (partition by null\n order by eventno desc) as initial_charge,\n sum(outqty) over (partition by null\n order by eventno desc\n rows between unbounded preceding and current row)\n as running_outqty\n from t42\n);\n
\nExcept it gives 19.2 instead of 12.8 for the third row, but that's what your formula suggests it should be:
\n EVENTNO EVENT TOTALCHARGE REMAININGQTY OUTQTY REMAININGCHARGE\n---------- ----- ----------- ------------ ---------- ---------------\n 4 ACQ 32 100 0 32\n 3 OTHER 100 0 32\n 2 OUT 60 40 19.2\n 1 OUT 0 60 0\n
\nIf I add another split so it goes from 60 to zero in two steps, with another non-OUT record in the mix too:
\n EVENTNO EVENT TOTALCHARGE REMAININGQTY OUTQTY REMAININGCHARGE\n---------- ----- ----------- ------------ ---------- ---------------\n 6 ACQ 32 100 0 32\n 5 OTHER 100 0 32\n 4 OUT 60 40 19.2\n 3 OUT 30 30 9.6\n 2 OTHER 30 0 9.6\n 1 OUT 0 30 0\n
\nThere's an assumption that the remaining quantity is consistent and you can effectively track a running total of what has gone before, but from the data you've shown that looks plausible. The inner query calculates that running total for each row, and the outer query does the calculation; that could be condensed but is hopefully clearer like this...
\n
soup wrap:
A variation on Ben's answer to use a windowing clause, which seems to take care of your updated requirements:
select eventno, eventtype, totalcharge, remainingqty, outqty,
initial_charge - case when running_outqty = 0 then 0
else (running_outqty / 100) * initial_charge end as remainingcharge
from (
select eventno, eventtype, totalcharge, remainingqty, outqty,
first_value(totalcharge) over (partition by null
order by eventno desc) as initial_charge,
sum(outqty) over (partition by null
order by eventno desc
rows between unbounded preceding and current row)
as running_outqty
from t42
);
Except it gives 19.2 instead of 12.8 for the third row, but that's what your formula suggests it should be:
EVENTNO EVENT TOTALCHARGE REMAININGQTY OUTQTY REMAININGCHARGE
---------- ----- ----------- ------------ ---------- ---------------
4 ACQ 32 100 0 32
3 OTHER 100 0 32
2 OUT 60 40 19.2
1 OUT 0 60 0
If I add another split so it goes from 60 to zero in two steps, with another non-OUT record in the mix too:
EVENTNO EVENT TOTALCHARGE REMAININGQTY OUTQTY REMAININGCHARGE
---------- ----- ----------- ------------ ---------- ---------------
6 ACQ 32 100 0 32
5 OTHER 100 0 32
4 OUT 60 40 19.2
3 OUT 30 30 9.6
2 OTHER 30 0 9.6
1 OUT 0 30 0
There's an assumption that the remaining quantity is consistent and you can effectively track a running total of what has gone before, but from the data you've shown that looks plausible. The inner query calculates that running total for each row, and the outer query does the calculation; that could be condensed but is hopefully clearer like this...
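The inner query's window functions behave the same way in SQLite (3.25+), which makes the running-total step easy to verify. A minimal sketch with the four rows from the first result table; only the columns the calculation needs are kept.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t42 (eventno INTEGER, outqty INTEGER, totalcharge REAL);
INSERT INTO t42 VALUES (4, 0, 32), (3, 0, NULL), (2, 40, NULL), (1, 60, NULL);
""")

# initial_charge: the charge on the newest event (FIRST_VALUE over eventno DESC);
# running_outqty: cumulative OUT quantity from newest event down to this row.
rows = conn.execute("""
SELECT eventno,
       FIRST_VALUE(totalcharge) OVER (ORDER BY eventno DESC) AS initial_charge,
       SUM(outqty) OVER (ORDER BY eventno DESC
                         ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
         AS running_outqty
FROM t42
ORDER BY eventno DESC
""").fetchall()

# The outer query's formula, applied to the third row (eventno 2):
eventno, initial, running = rows[2]
remaining = initial - (running / 100) * initial
```

With 40 units out of 100 gone, the formula gives 32 - 0.4 * 32 = 19.2, matching the disputed third row.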
qid & accept id:
(16184493, 16184689)
query:
SQL Server Insert table into same table?
soup:
Because Primary keys must contain unique value and cannot contain NULL values. so use following queries if your table don't have primary key.
\nfor all columns use:
\nINSERT INTO dbo.Calls SELECT * fROM dbo.Calls\n
\nfor selected columns use:
\n INSERT INTO dbo.Calls () SELECT FROM dbo.Calls\n
\n
soup wrap:
Primary keys must contain unique values and cannot contain NULL values, so use the following queries if your table doesn't have a primary key.
for all columns use:
INSERT INTO dbo.Calls SELECT * FROM dbo.Calls
for selected columns use:
INSERT INTO dbo.Calls (col1, col2) SELECT col1, col2 FROM dbo.Calls -- replace col1, col2 with the columns to copy
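A minimal sketch of the self-copy in SQLite: with no primary key, INSERT ... SELECT from the same table simply doubles the row count. The `Calls` columns here are invented sample data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Calls (caller TEXT, duration INTEGER);  -- no primary key
INSERT INTO Calls VALUES ('alice', 10), ('bob', 20);
""")

# Duplicate every row back into the same table.
conn.execute("INSERT INTO Calls SELECT * FROM Calls")
count = conn.execute("SELECT COUNT(*) FROM Calls").fetchone()[0]
```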
qid & accept id:
(16186786, 16186806)
query:
MySQL compare same values in two column
soup:
SELECT jamu_a,\n jamu_b,\n GROUP_CONCAT(khasiat) khasiat,\n COUNT(*) total\nFROM TableName\nGROUP BY jamu_a, jamu_b\n
\n\nOUTPUT
\n╔════════╦════════╦═════════╦═══════╗\n║ JAMU_A ║ JAMU_B ║ KHASIAT ║ TOTAL ║\n╠════════╬════════╬═════════╬═══════╣\n║ A ║ B ║ Z,X,C ║ 3 ║\n╚════════╩════════╩═════════╩═══════╝\n
\nif there are repeating values on column KHASIAT and you want it to be unique, you can add DISTINCT on GROUP_CONCAT()
\nSELECT jamu_a,\n jamu_b,\n GROUP_CONCAT(DISTINCT khasiat) khasiat,\n COUNT(*) total\nFROM TableName\nGROUP BY jamu_a, jamu_b\n
\n
soup wrap:
SELECT jamu_a,
jamu_b,
GROUP_CONCAT(khasiat) khasiat,
COUNT(*) total
FROM TableName
GROUP BY jamu_a, jamu_b
OUTPUT
╔════════╦════════╦═════════╦═══════╗
║ JAMU_A ║ JAMU_B ║ KHASIAT ║ TOTAL ║
╠════════╬════════╬═════════╬═══════╣
║ A ║ B ║ Z,X,C ║ 3 ║
╚════════╩════════╩═════════╩═══════╝
If there are repeating values in column KHASIAT and you want them to be unique, you can add DISTINCT to GROUP_CONCAT():
SELECT jamu_a,
jamu_b,
GROUP_CONCAT(DISTINCT khasiat) khasiat,
COUNT(*) total
FROM TableName
GROUP BY jamu_a, jamu_b
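SQLite also supports GROUP_CONCAT (with DISTINCT only usable with the default comma separator), so the deduplication can be checked directly. A minimal sketch with invented rows, including a repeated khasiat value.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TableName (jamu_a TEXT, jamu_b TEXT, khasiat TEXT);
INSERT INTO TableName VALUES ('A', 'B', 'Z'), ('A', 'B', 'X'), ('A', 'B', 'X');
""")

# COUNT(*) still counts all 3 rows, but DISTINCT collapses the repeated 'X'.
row = conn.execute("""
SELECT jamu_a, jamu_b,
       GROUP_CONCAT(DISTINCT khasiat) AS khasiat,
       COUNT(*) AS total
FROM TableName
GROUP BY jamu_a, jamu_b
""").fetchone()
```

Note that GROUP_CONCAT's element order is not guaranteed, so compare it as a set rather than a literal string.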
qid & accept id:
(16212126, 16212506)
query:
Updating a Dataset to add caclulated fields
soup:
You could use something like this to get the results for each jockey in one row:
\nSELECT jockey.jockey_skey,\n TotalRaces = COUNT(*),\n [1sts] = COUNT(CASE WHEN raceresults.place = '01' THEN 1 END),\n [2nds] = COUNT(CASE WHEN raceresults.place = '02' THEN 1 END),\n [3rds] = COUNT(CASE WHEN raceresults.place = '03' THEN 1 END),\n [4ths] = COUNT(CASE WHEN raceresults.place = '04' THEN 1 END),\n [5ths] = COUNT(CASE WHEN raceresults.place = '05' THEN 1 END),\n [6ths] = COUNT(CASE WHEN raceresults.place = '06' THEN 1 END),\n [7ths] = COUNT(CASE WHEN raceresults.place = '07' THEN 1 END),\n [8ths] = COUNT(CASE WHEN raceresults.place = '08' THEN 1 END),\n -- etc\n [NonRunner] = COUNT(CASE WHEN raceresults.place = 'NR' THEN 1 END),\n [Fell] = COUNT(CASE WHEN raceresults.place = 'F' THEN 1 END),\n [PulledUp] = COUNT(CASE WHEN raceresults.place = 'PU' THEN 1 END),\n [Unseated] = COUNT(CASE WHEN raceresults.place = 'U' THEN 1 END),\n [Refused] = COUNT(CASE WHEN raceresults.place = 'R' THEN 1 END),\n [BroughtDown] = COUNT(CASE WHEN raceresults.place = 'B' THEN 1 END)\nFROM jockey \n INNER JOIN runnersandriders \n ON jockey.jockey_skey = runnersandriders.jockey_skey \n INNER JOIN horse \n ON runnersandriders.horse_skey = horse.horse_skey \n INNER JOIN raceresults \n ON horse.horse_skey = raceresults.horse_skey \nGROUP BY jockey.jockey_skey\nORDER BY jockey.jockey_skey \n
\nSimplified Example on SQL Fiddle
\nALternatively you could use WITH ROLLUP to get an additional row with totals:
\nSELECT jockey.jockey_skey,\n raceresults.place,\n [CountOfResult] = COUNT(*)\nFROM jockey \n INNER JOIN runnersandriders \n ON jockey.jockey_skey = runnersandriders.jockey_skey \n INNER JOIN horse \n ON runnersandriders.horse_skey = horse.horse_skey \n INNER JOIN raceresults \n ON horse.horse_skey = raceresults.horse_skey \nGROUP BY jockey.jockey_skey, raceresults.place\nWITH ROLLUP\nORDER BY jockey.jockey_skey, raceresults.place;\n
\nWhere NULL values represent totals
\nSimplified Example on SQL Fiddle
\n
soup wrap:
You could use something like this to get the results for each jockey in one row:
SELECT jockey.jockey_skey,
TotalRaces = COUNT(*),
[1sts] = COUNT(CASE WHEN raceresults.place = '01' THEN 1 END),
[2nds] = COUNT(CASE WHEN raceresults.place = '02' THEN 1 END),
[3rds] = COUNT(CASE WHEN raceresults.place = '03' THEN 1 END),
[4ths] = COUNT(CASE WHEN raceresults.place = '04' THEN 1 END),
[5ths] = COUNT(CASE WHEN raceresults.place = '05' THEN 1 END),
[6ths] = COUNT(CASE WHEN raceresults.place = '06' THEN 1 END),
[7ths] = COUNT(CASE WHEN raceresults.place = '07' THEN 1 END),
[8ths] = COUNT(CASE WHEN raceresults.place = '08' THEN 1 END),
-- etc
[NonRunner] = COUNT(CASE WHEN raceresults.place = 'NR' THEN 1 END),
[Fell] = COUNT(CASE WHEN raceresults.place = 'F' THEN 1 END),
[PulledUp] = COUNT(CASE WHEN raceresults.place = 'PU' THEN 1 END),
[Unseated] = COUNT(CASE WHEN raceresults.place = 'U' THEN 1 END),
[Refused] = COUNT(CASE WHEN raceresults.place = 'R' THEN 1 END),
[BroughtDown] = COUNT(CASE WHEN raceresults.place = 'B' THEN 1 END)
FROM jockey
INNER JOIN runnersandriders
ON jockey.jockey_skey = runnersandriders.jockey_skey
INNER JOIN horse
ON runnersandriders.horse_skey = horse.horse_skey
INNER JOIN raceresults
ON horse.horse_skey = raceresults.horse_skey
GROUP BY jockey.jockey_skey
ORDER BY jockey.jockey_skey
Simplified Example on SQL Fiddle
Alternatively you could use WITH ROLLUP to get an additional row with totals:
SELECT jockey.jockey_skey,
raceresults.place,
[CountOfResult] = COUNT(*)
FROM jockey
INNER JOIN runnersandriders
ON jockey.jockey_skey = runnersandriders.jockey_skey
INNER JOIN horse
ON runnersandriders.horse_skey = horse.horse_skey
INNER JOIN raceresults
ON horse.horse_skey = raceresults.horse_skey
GROUP BY jockey.jockey_skey, raceresults.place
WITH ROLLUP
ORDER BY jockey.jockey_skey, raceresults.place;
Where NULL values represent totals
Simplified Example on SQL Fiddle
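The first query's trick, COUNT over a CASE with no ELSE, works because COUNT ignores NULLs. A minimal sketch in SQLite with an invented flat `results(jockey, place)` table standing in for the joined race data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE results (jockey TEXT, place TEXT);
INSERT INTO results VALUES ('j1', '01'), ('j1', '02'), ('j1', '01'), ('j2', 'F');
""")

# CASE yields 1 for matching rows and NULL otherwise; COUNT skips the NULLs.
rows = conn.execute("""
SELECT jockey,
       COUNT(*) AS TotalRaces,
       COUNT(CASE WHEN place = '01' THEN 1 END) AS firsts,
       COUNT(CASE WHEN place = 'F'  THEN 1 END) AS fell
FROM results
GROUP BY jockey
ORDER BY jockey
""").fetchall()
```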
qid & accept id:
(16216129, 16216258)
query:
Using a current row value into a subquery
soup:
Based on your description, this may be the query that you want:
\nselect person, AVG(OrderTotal), COUNT(distinct orderId)\nfrom (select Customer_id as person, Order_id, SUM(total) as OrderTotal\n from Orders\n group by Customer_Id, Order_Id\n ) o\ngroup by person \n
\nI say "may" because I would expect OrderId to be a unique key in the Orders table. So, the inner subquery wouldn't be doing anything. Perhaps you mean something like OrderLines in the inner query.
\nThe reason your query fails is because of the correlation statement:
\nwhere Customer_Id = person\n
\nYou intend for this to use the value from the outer query ("person") to relate to the inner one ("Customer_Id"). However, the inner query does not know the alias in the select clause of the outer one. So, "Person" is undefined. When doing correlated subqueries, you should always use table aliases. That query should look more like:
\n(select COUNT(o2.Order_Id) as timesSeen \n from Orders o2 where o2.Customer_Id=o.person \n group by o2.Order_Id\n)\n
\nAssuming "o" is the alias for orders in the outer query. Correlated subqueries are not needed. You should just simplify the query.
\n
soup wrap:
Based on your description, this may be the query that you want:
select person, AVG(OrderTotal), COUNT(distinct orderId)
from (select Customer_id as person, Order_id, SUM(total) as OrderTotal
from Orders
group by Customer_Id, Order_Id
) o
group by person
I say "may" because I would expect OrderId to be a unique key in the Orders table. So, the inner subquery wouldn't be doing anything. Perhaps you mean something like OrderLines in the inner query.
The reason your query fails is because of the correlation statement:
where Customer_Id = person
You intend for this to use the value from the outer query ("person") to relate to the inner one ("Customer_Id"). However, the inner query does not know the alias in the select clause of the outer one. So, "Person" is undefined. When doing correlated subqueries, you should always use table aliases. That query should look more like:
(select COUNT(o2.Order_Id) as timesSeen
from Orders o2 where o2.Customer_Id=o.person
group by o2.Order_Id
)
Assuming "o" is the alias for orders in the outer query. Correlated subqueries are not needed. You should just simplify the query.
qid & accept id:
(16223233, 16226292)
query:
Codeigniter - loop through post information passing value to model query and outputting result
soup:
You should use CodeIgniter's input class to get all post values.
\n$formValues = $this->input->post(NULL, TRUE);\n
\nThen in your controller set an intermediate value to hold your data.
\n$products = array();\n\nforeach($formValues as $key => $value) \n{\n $products[] = $this->sales_model->get_productdetails($key)\n}\n\n$data = array();\n$data["products"] = $products;\n
\nPass intermediary to the view.
\n$this->load->view('sales/new_autospread_order_lines', $data);\n
\nIn your review reference each hashed item in the $data array as a variable.
\n
soup wrap:
You should use CodeIgniter's input class to get all post values.
$formValues = $this->input->post(NULL, TRUE);
Then in your controller set an intermediate value to hold your data.
$products = array();
foreach($formValues as $key => $value)
{
$products[] = $this->sales_model->get_productdetails($key)
}
$data = array();
$data["products"] = $products;
Pass intermediary to the view.
$this->load->view('sales/new_autospread_order_lines', $data);
In your view, reference each key of the $data array as its own variable (CodeIgniter extracts them, so $products is available directly).
qid & accept id:
(16291075, 16291086)
query:
oracle duplicate rows based on a single column
soup:
SELECT a.*\nFROM TableName a\n INNER JOIN\n (\n SELECT EmpID\n FROM TableName\n GROUP BY EmpID\n HAVING COUNT(*) > 1\n ) b ON a.EmpID = b.EmpID\n
\n\n- SQLFiddle Demo
\n
\nAnother way, although I prefer above, is to use IN
\nSELECT a.*\nFROM TableName a\nWHERE EmpId IN\n (\n SELECT EmpId\n FROM TableName\n GROUP BY EmpId\n HAVING COUNT(*) > 1\n ) \n
\n\n- SQLFiddle Demo
\n
\n
soup wrap:
SELECT a.*
FROM TableName a
INNER JOIN
(
SELECT EmpID
FROM TableName
GROUP BY EmpID
HAVING COUNT(*) > 1
) b ON a.EmpID = b.EmpID
Another way, although I prefer above, is to use IN
SELECT a.*
FROM TableName a
WHERE EmpId IN
(
SELECT EmpId
FROM TableName
GROUP BY EmpId
HAVING COUNT(*) > 1
)
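Both forms return every row whose EmpID appears more than once. A minimal sketch of the IN variant in SQLite with an invented table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TableName (EmpID INTEGER, name TEXT);
INSERT INTO TableName VALUES (1, 'a'), (1, 'b'), (2, 'c');
""")

# The subquery finds EmpIDs occurring more than once;
# the outer query returns all of their rows.
rows = conn.execute("""
SELECT a.*
FROM TableName a
WHERE a.EmpID IN (SELECT EmpID FROM TableName
                  GROUP BY EmpID
                  HAVING COUNT(*) > 1)
ORDER BY a.name
""").fetchall()
```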
qid & accept id:
(16330159, 16330179)
query:
Interview : update table values using select statement
soup:
Try this..
\nUpdate TableName Set Gender=Case when Gender='M' Then 'F' Else 'M' end\n
\nOn OP request..update using Select...
\nUpdate TableName T Set Gender=(\nSelect Gender from TableName B where T.Gender!=B.Gender and rownum=1);\n
\n\n
soup wrap:
Try this:
Update TableName Set Gender=Case when Gender='M' Then 'F' Else 'M' end
As the OP requested, an update using a SELECT:
Update TableName T Set Gender=(
Select Gender from TableName B where T.Gender!=B.Gender and rownum=1);
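The single-statement CASE swap can be verified in SQLite. A minimal sketch with an invented `people` table; every M becomes F and vice versa in one UPDATE:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE people (name TEXT, Gender TEXT);
INSERT INTO people VALUES ('x', 'M'), ('y', 'F'), ('z', 'M');
""")

# One pass: CASE flips each value, no temp table needed.
conn.execute("UPDATE people SET Gender = CASE WHEN Gender = 'M' THEN 'F' ELSE 'M' END")
genders = [g for (g,) in conn.execute("SELECT Gender FROM people ORDER BY name")]
```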
qid & accept id:
(16335925, 16336267)
query:
difference in days, between two recordings
soup:
Please try:
\n;with T as(\n select *, ROW_NUMBER() over (order by User, Days) Rnum from YourTable\n)\nselect \n distinct a.User, \n b.Days-a.Days difference_in_day \nfrom T a left join T b on a.Rnum=b.Rnum-1 \nwhere b.User is not null\n
\nSample
\ndeclare @tbl as table(xUser nvarchar(1), xDays int)\ninsert into @tbl values \n('A', 1),\n('A', 1),\n('A', 2),\n('B', 2),\n('B', 5)\n\nselect *, ROW_NUMBER() over (order by xUser, xDays) Rnum from @tbl\n\n;with T as(\n select *, ROW_NUMBER() over (order by xUser, xDays) Rnum from @tbl\n)\nselect \n distinct a.xUser, \n b.xDays-a.xDays difference_in_day \nfrom T a left join T b on a.Rnum=b.Rnum-1 \nwhere b.xUser is not null\n
\n
soup wrap:
Please try:
;with T as(
select *, ROW_NUMBER() over (order by User, Days) Rnum from YourTable
)
select
distinct a.User,
b.Days-a.Days difference_in_day
from T a left join T b on a.Rnum=b.Rnum-1
where b.User is not null
Sample
declare @tbl as table(xUser nvarchar(1), xDays int)
insert into @tbl values
('A', 1),
('A', 1),
('A', 2),
('B', 2),
('B', 5)
select *, ROW_NUMBER() over (order by xUser, xDays) Rnum from @tbl
;with T as(
select *, ROW_NUMBER() over (order by xUser, xDays) Rnum from @tbl
)
select
distinct a.xUser,
b.xDays-a.xDays difference_in_day
from T a left join T b on a.Rnum=b.Rnum-1
where b.xUser is not null
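The ROW_NUMBER self-join also runs in SQLite (3.25+). A minimal sketch using the answer's own sample rows, with an inner join plus a same-user filter as a close variant of the answer's LEFT JOIN ... WHERE b.xUser IS NOT NULL, so differences are only computed between consecutive rows of the same user:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE visits (xUser TEXT, xDays INTEGER);
INSERT INTO visits VALUES ('A', 1), ('A', 1), ('A', 2), ('B', 2), ('B', 5);
""")

# Number rows, then join each row to its successor (Rnum = Rnum - 1)
# and subtract the day values.
diffs = conn.execute("""
WITH T AS (
    SELECT *, ROW_NUMBER() OVER (ORDER BY xUser, xDays) AS Rnum FROM visits
)
SELECT a.xUser, b.xDays - a.xDays AS difference_in_days
FROM T a JOIN T b ON a.Rnum = b.Rnum - 1
WHERE a.xUser = b.xUser
ORDER BY a.Rnum
""").fetchall()
```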
qid & accept id:
(16372169, 16374519)
query:
return value of stored procedure based on different rules
soup:
The key is to create columns for each of your criteria, i.e. one column for if the next door flat owner has the same nationality, a column for if the floor is empty.
\nYou can then take all your criteria and place them within the order by of a ROW_NUMBER() function to get the flats in the order you defined. The key part in the below query is this:
\nRowNumber = ROW_NUMBER() OVER(ORDER BY PrevIsNationalityMatch DESC, \n NextIsNationalityMatch DESC, \n EmptyFloor DESC, \n EmptyFlatsEitherSide DESC,\n Floor, \n FlatNo)\n
\nThe four columns (PrevIsNationalityMatch, NextIsNationalityMatch, EmptyFloor', 'EmptyFlatsEitherSide), are all bit fields, so if a row exists where the previous flat is owned by someone of the same nationality this will always be ranked one by the ROW_NUMBER function, otherwise it looks for if the next flat is owned by someone of the same nationality (I added this rule as it seemed logical but it could easily be removed by removing it from the order by), and so on and so on until it is left just sorting by floor and flat no.
\nDECLARE @NewOwnerNationality VARCHAR(20) = 'BRAZIL';\nWITH FlatOwnerNationality AS\n( SELECT FlatMaster.Floor, \n FlatMaster.FlatNo, \n FlatMaster.IsOccupied,\n IsNationalityMatch = CASE WHEN OwnerMaster.OwnerNationality = @NewOwnerNationality THEN 1 ELSE 0 END\n FROM FlatMaster\n LEFT JOIN OwnerMaster\n ON OwnerMaster.OwnerName = FlatMaster.OwnerName\n), Flats AS\n( SELECT FlatMaster.Floor,\n FlatMaster.FlatNo,\n FlatMaster.IsOccupied,\n EmptyFlatsEitherSide = CASE WHEN PrevFlat.IsOccupied = 'NO' AND NextFlat.IsOccupied = 'NO' THEN 1 ELSE 0 END,\n EmptyFloor = CASE WHEN COUNT(CASE WHEN FlatMaster.IsOccupied = 'YES' THEN 1 END) OVER(PARTITION BY FlatMaster.Floor) = 0 THEN 1 ELSE 0 END,\n PrevIsNationalityMatch = ISNULL(PrevFlat.IsNationalityMatch, 0),\n NextIsNationalityMatch = ISNULL(NextFlat.IsNationalityMatch, 0)\n FROM FlatMaster\n LEFT JOIN FlatOwnerNationality PrevFlat\n ON PrevFlat.Floor = FlatMaster.Floor\n AND PrevFlat.FlatNo = FlatMaster.FlatNo - 1\n LEFT JOIN FlatOwnerNationality NextFlat\n ON NextFlat.Floor = FlatMaster.Floor\n AND NextFlat.FlatNo = FlatMaster.FlatNo + 1\n), RankedFlats AS\n( SELECT *,\n RowNumber = ROW_NUMBER() OVER(ORDER BY PrevIsNationalityMatch DESC, \n NextIsNationalityMatch DESC, \n EmptyFloor DESC, \n EmptyFlatsEitherSide DESC,\n Floor, \n FlatNo)\n FROM Flats\n WHERE IsOccupied = 'NO'\n)\nSELECT Floor,\n FlatNo,\n MatchedOn = CASE WHEN PrevIsNationalityMatch = 1 THEN 'First Flat after same nationality owner'\n WHEN NextIsNationalityMatch = 1 THEN 'First Flat before same nationality owner'\n WHEN EmptyFloor = 1 THEN 'No Nationality Match, placed on empty floor'\n WHEN EmptyFlatsEitherSide = 1 THEN 'Next flat with empty flats either side'\n ELSE 'First Available Flat'\n END\nFROM RankedFlats\nWHERE RowNumber = 1;\n
\nBrazil Example - Floor 1, Flat 4
\nEngland Example - Floor 1, Flat 2
\nSpain Example - Floor 2, Flat 1
\nEDIT
\nDECLARE @NewOwnerNationality VARCHAR(20) = 'BRAZIL';\n\nWITH FlatOwnerNationality AS\n( SELECT FlatMaster.Floor, \n FlatMaster.FlatNo, \n FlatMaster.IsOccupied,\n IsNationalityMatch = CASE WHEN OwnerMaster.OwnerNationality = @NewOwnerNationality THEN 1 ELSE 0 END\n FROM FlatMaster\n LEFT JOIN OwnerMaster\n ON OwnerMaster.OwnerName = FlatMaster.OwnerName\n), Flats AS\n( SELECT FlatMaster.Floor,\n FlatMaster.FlatNo,\n FlatMaster.IsOccupied,\n EmptyFlatsEitherSide = CASE WHEN PrevFlat.IsOccupied = 'NO' AND NextFlat.IsOccupied = 'NO' AND PrevFlat2.IsOccupied = 'NO' AND NextFlat2.IsOccupied = 'NO' THEN 1 ELSE 0 END,\n EmptyFloor = CASE WHEN COUNT(CASE WHEN FlatMaster.IsOccupied = 'YES' THEN 1 END) OVER(PARTITION BY FlatMaster.Floor) = 0 THEN 1 ELSE 0 END,\n PrevIsNationalityMatch = ISNULL(PrevFlat.IsNationalityMatch, 0),\n NextIsNationalityMatch = ISNULL(NextFlat.IsNationalityMatch, 0)\n FROM FlatMaster\n LEFT JOIN FlatOwnerNationality PrevFlat\n ON PrevFlat.Floor = FlatMaster.Floor\n AND PrevFlat.FlatNo = FlatMaster.FlatNo - 1\n LEFT JOIN FlatOwnerNationality NextFlat\n ON NextFlat.Floor = FlatMaster.Floor\n AND NextFlat.FlatNo = FlatMaster.FlatNo + 1\n LEFT JOIN FlatMaster PrevFlat2\n ON PrevFlat2.Floor = FlatMaster.Floor\n AND PrevFlat2.FlatNo = FlatMaster.FlatNo - 2\n LEFT JOIN FlatMaster NextFlat2\n ON NextFlat2.Floor = FlatMaster.Floor\n AND NextFlat2.FlatNo = FlatMaster.FlatNo + 2\n\n), RankedFlats AS\n( SELECT *,\n RowNumber = ROW_NUMBER() OVER(ORDER BY PrevIsNationalityMatch DESC, \n NextIsNationalityMatch DESC, \n EmptyFloor DESC, \n EmptyFlatsEitherSide DESC,\n Floor, \n FlatNo)\n FROM Flats\n WHERE IsOccupied = 'NO'\n)\nSELECT Floor,\n FlatNo,\n MatchedOn = CASE WHEN PrevIsNationalityMatch = 1 THEN 'First Flat after same nationality owner'\n WHEN NextIsNationalityMatch = 1 THEN 'First Flat before same nationality owner'\n WHEN EmptyFloor = 1 THEN 'No Nationality Match, placed on empty floor'\n WHEN EmptyFlatsEitherSide = 1 THEN 'Next flat with empty flats 
either side'\n ELSE 'First Available Flat'\n END\nFROM RankedFlats\nWHERE RowNumber = 1;\n
\n
soup wrap:
The key is to create columns for each of your criteria, i.e. one column for if the next door flat owner has the same nationality, a column for if the floor is empty.
You can then take all your criteria and place them within the order by of a ROW_NUMBER() function to get the flats in the order you defined. The key part in the below query is this:
RowNumber = ROW_NUMBER() OVER(ORDER BY PrevIsNationalityMatch DESC,
NextIsNationalityMatch DESC,
EmptyFloor DESC,
EmptyFlatsEitherSide DESC,
Floor,
FlatNo)
The four columns (PrevIsNationalityMatch, NextIsNationalityMatch, EmptyFloor, EmptyFlatsEitherSide) are all bit fields. If a row exists where the previous flat is owned by someone of the same nationality, it will always be ranked first by the ROW_NUMBER function; otherwise it looks for whether the next flat is owned by someone of the same nationality (I added this rule as it seemed logical, but it could easily be removed from the order by), and so on, until it is left just sorting by floor and flat no.
DECLARE @NewOwnerNationality VARCHAR(20) = 'BRAZIL';
WITH FlatOwnerNationality AS
( SELECT FlatMaster.Floor,
FlatMaster.FlatNo,
FlatMaster.IsOccupied,
IsNationalityMatch = CASE WHEN OwnerMaster.OwnerNationality = @NewOwnerNationality THEN 1 ELSE 0 END
FROM FlatMaster
LEFT JOIN OwnerMaster
ON OwnerMaster.OwnerName = FlatMaster.OwnerName
), Flats AS
( SELECT FlatMaster.Floor,
FlatMaster.FlatNo,
FlatMaster.IsOccupied,
EmptyFlatsEitherSide = CASE WHEN PrevFlat.IsOccupied = 'NO' AND NextFlat.IsOccupied = 'NO' THEN 1 ELSE 0 END,
EmptyFloor = CASE WHEN COUNT(CASE WHEN FlatMaster.IsOccupied = 'YES' THEN 1 END) OVER(PARTITION BY FlatMaster.Floor) = 0 THEN 1 ELSE 0 END,
PrevIsNationalityMatch = ISNULL(PrevFlat.IsNationalityMatch, 0),
NextIsNationalityMatch = ISNULL(NextFlat.IsNationalityMatch, 0)
FROM FlatMaster
LEFT JOIN FlatOwnerNationality PrevFlat
ON PrevFlat.Floor = FlatMaster.Floor
AND PrevFlat.FlatNo = FlatMaster.FlatNo - 1
LEFT JOIN FlatOwnerNationality NextFlat
ON NextFlat.Floor = FlatMaster.Floor
AND NextFlat.FlatNo = FlatMaster.FlatNo + 1
), RankedFlats AS
( SELECT *,
RowNumber = ROW_NUMBER() OVER(ORDER BY PrevIsNationalityMatch DESC,
NextIsNationalityMatch DESC,
EmptyFloor DESC,
EmptyFlatsEitherSide DESC,
Floor,
FlatNo)
FROM Flats
WHERE IsOccupied = 'NO'
)
SELECT Floor,
FlatNo,
MatchedOn = CASE WHEN PrevIsNationalityMatch = 1 THEN 'First Flat after same nationality owner'
WHEN NextIsNationalityMatch = 1 THEN 'First Flat before same nationality owner'
WHEN EmptyFloor = 1 THEN 'No Nationality Match, placed on empty floor'
WHEN EmptyFlatsEitherSide = 1 THEN 'Next flat with empty flats either side'
ELSE 'First Available Flat'
END
FROM RankedFlats
WHERE RowNumber = 1;
Brazil Example - Floor 1, Flat 4
England Example - Floor 1, Flat 2
Spain Example - Floor 2, Flat 1
EDIT
DECLARE @NewOwnerNationality VARCHAR(20) = 'BRAZIL';
WITH FlatOwnerNationality AS
( SELECT FlatMaster.Floor,
FlatMaster.FlatNo,
FlatMaster.IsOccupied,
IsNationalityMatch = CASE WHEN OwnerMaster.OwnerNationality = @NewOwnerNationality THEN 1 ELSE 0 END
FROM FlatMaster
LEFT JOIN OwnerMaster
ON OwnerMaster.OwnerName = FlatMaster.OwnerName
), Flats AS
( SELECT FlatMaster.Floor,
FlatMaster.FlatNo,
FlatMaster.IsOccupied,
EmptyFlatsEitherSide = CASE WHEN PrevFlat.IsOccupied = 'NO' AND NextFlat.IsOccupied = 'NO' AND PrevFlat2.IsOccupied = 'NO' AND NextFlat2.IsOccupied = 'NO' THEN 1 ELSE 0 END,
EmptyFloor = CASE WHEN COUNT(CASE WHEN FlatMaster.IsOccupied = 'YES' THEN 1 END) OVER(PARTITION BY FlatMaster.Floor) = 0 THEN 1 ELSE 0 END,
PrevIsNationalityMatch = ISNULL(PrevFlat.IsNationalityMatch, 0),
NextIsNationalityMatch = ISNULL(NextFlat.IsNationalityMatch, 0)
FROM FlatMaster
LEFT JOIN FlatOwnerNationality PrevFlat
ON PrevFlat.Floor = FlatMaster.Floor
AND PrevFlat.FlatNo = FlatMaster.FlatNo - 1
LEFT JOIN FlatOwnerNationality NextFlat
ON NextFlat.Floor = FlatMaster.Floor
AND NextFlat.FlatNo = FlatMaster.FlatNo + 1
LEFT JOIN FlatMaster PrevFlat2
ON PrevFlat2.Floor = FlatMaster.Floor
AND PrevFlat2.FlatNo = FlatMaster.FlatNo - 2
LEFT JOIN FlatMaster NextFlat2
ON NextFlat2.Floor = FlatMaster.Floor
AND NextFlat2.FlatNo = FlatMaster.FlatNo + 2
), RankedFlats AS
( SELECT *,
RowNumber = ROW_NUMBER() OVER(ORDER BY PrevIsNationalityMatch DESC,
NextIsNationalityMatch DESC,
EmptyFloor DESC,
EmptyFlatsEitherSide DESC,
Floor,
FlatNo)
FROM Flats
WHERE IsOccupied = 'NO'
)
SELECT Floor,
FlatNo,
MatchedOn = CASE WHEN PrevIsNationalityMatch = 1 THEN 'First Flat after same nationality owner'
WHEN NextIsNationalityMatch = 1 THEN 'First Flat before same nationality owner'
WHEN EmptyFloor = 1 THEN 'No Nationality Match, placed on empty floor'
WHEN EmptyFlatsEitherSide = 1 THEN 'Next flat with empty flats either side'
ELSE 'First Available Flat'
END
FROM RankedFlats
WHERE RowNumber = 1;
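The core ranking idea, boolean criteria columns sorted DESC inside ROW_NUMBER so the most preferred flat gets rank 1, can be shown in a few lines. A minimal sketch in SQLite with invented flats and precomputed flag columns (two of the four criteria, for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE flats (floor INTEGER, flatno INTEGER,
                    prev_match INTEGER, empty_floor INTEGER);
INSERT INTO flats VALUES (1, 2, 0, 0), (1, 4, 1, 0), (2, 1, 0, 1);
""")

# Flags sorted DESC: a nationality match outranks an empty floor,
# which outranks plain floor/flat order.
best = conn.execute("""
SELECT floor, flatno
FROM (SELECT *, ROW_NUMBER() OVER (ORDER BY prev_match DESC, empty_floor DESC,
                                   floor, flatno) AS rn
      FROM flats) ranked
WHERE rn = 1
""").fetchone()
```

Flat (1, 4) wins because its prev_match flag is set, even though flat (1, 2) comes first in floor/flat order.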
qid & accept id:
(16426039, 16427224)
query:
Stored procedure for getting sum of entries in table for each ID
soup:
Logically, you are grouping by two criteria, scale and skill name. However, if I understand it correctly, every row is supposed to represent a single skill name. Therefore, you should group by tblSkill.Name only. To get different counts for different scales in separate columns, you can use conditional aggregation, i.e. aggregation on an expression that (usually) involves a CASE construct. Here's how you could go about it:
\nSELECT \n tblSkill.Name AS skillname,\n COUNT(CASE tblSkillMetrics.Scale WHEN 1 THEN EmployeeID END) AS NotAplicable,\n COUNT(CASE tblSkillMetrics.Scale WHEN 2 THEN EmployeeID END) AS Beginner,\n COUNT(CASE tblSkillMetrics.Scale WHEN 3 THEN EmployeeID END) AS Proficient,\n COUNT(CASE tblSkillMetrics.Scale WHEN 4 THEN EmployeeID END) AS Expert\nFROM\n tblSkill\nINNER JOIN \n tblSkillMetrics ON tblSkillMetrics.SkillID = tblSkill.ID\nGROUP BY \n tblSkill.Name \nORDER BY \n skillname DESC\n;\n
\nNote that there's a special syntax for this kind of queries. It employs the PIVOT keyword, as what you get is essentially a grouped result set pivoted on one of the grouping criteria, scale in this case. This is how the same could be achieved with PIVOT:
\nSELECT\n skillname,\n [1] AS NotAplicable,\n [2] AS Beginner,\n [3] AS Proficient,\n [4] AS Expert\nFROM (\n SELECT \n tblSkill.Name AS skillname,\n tblSkillMetrics.Scale,\n EmployeeID\n FROM\n tblSkill\n INNER JOIN \n tblSkillMetrics ON tblSkillMetrics.SkillID = tblSkill.ID\n) s\nPIVOT (\n COUNT(EmployeeID) FOR Scale IN ([1], [2], [3], [4])\n) p\n;\n
\nBasically, PIVOT implies grouping. All columns but one in the source dataset are grouping criteria, namely every one of them that is not used as an argument of an aggregate function in the PIVOT clause is a grouping criterion. One of them is also assigned to be the one the results are pivoted on. (Again, in this case it is scale.)
\nBecause grouping is implicit, a derived table is used to avoid grouping by more criteria than necessary. Values of Scale become names of new columns that the PIVOT clause produces. (That is why they are delimited with square brackets when listed in PIVOT: they are not IDs in that context but identifiers delimited as required by Transact-SQL syntax.)
\n
soup wrap:
Logically, you are grouping by two criteria, scale and skill name. However, if I understand it correctly, every row is supposed to represent a single skill name. Therefore, you should group by tblSkill.Name only. To get different counts for different scales in separate columns, you can use conditional aggregation, i.e. aggregation on an expression that (usually) involves a CASE construct. Here's how you could go about it:
SELECT
tblSkill.Name AS skillname,
COUNT(CASE tblSkillMetrics.Scale WHEN 1 THEN EmployeeID END) AS NotAplicable,
COUNT(CASE tblSkillMetrics.Scale WHEN 2 THEN EmployeeID END) AS Beginner,
COUNT(CASE tblSkillMetrics.Scale WHEN 3 THEN EmployeeID END) AS Proficient,
COUNT(CASE tblSkillMetrics.Scale WHEN 4 THEN EmployeeID END) AS Expert
FROM
tblSkill
INNER JOIN
tblSkillMetrics ON tblSkillMetrics.SkillID = tblSkill.ID
GROUP BY
tblSkill.Name
ORDER BY
skillname DESC
;
Note that there's a special syntax for this kind of query. It employs the PIVOT keyword, as what you get is essentially a grouped result set pivoted on one of the grouping criteria, scale in this case. This is how the same could be achieved with PIVOT:
SELECT
skillname,
[1] AS NotAplicable,
[2] AS Beginner,
[3] AS Proficient,
[4] AS Expert
FROM (
SELECT
tblSkill.Name AS skillname,
tblSkillMetrics.Scale,
EmployeeID
FROM
tblSkill
INNER JOIN
tblSkillMetrics ON tblSkillMetrics.SkillID = tblSkill.ID
) s
PIVOT (
COUNT(EmployeeID) FOR Scale IN ([1], [2], [3], [4])
) p
;
Basically, PIVOT implies grouping. All columns but one in the source dataset are grouping criteria, namely every one of them that is not used as an argument of an aggregate function in the PIVOT clause is a grouping criterion. One of them is also assigned to be the one the results are pivoted on. (Again, in this case it is scale.)
Because grouping is implicit, a derived table is used to avoid grouping by more criteria than necessary. Values of Scale become names of new columns that the PIVOT clause produces. (That is why they are delimited with square brackets when listed in PIVOT: they are not IDs in that context but identifiers delimited as required by Transact-SQL syntax.)
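The conditional-aggregation variant is portable enough to replay outside SQL Server. Here is a rough sketch using Python's sqlite3 as a stand-in engine (SQLite has no PIVOT, so only the CASE-based form is shown); the table names mirror the answer, but the sample rows are invented:

```python
import sqlite3

# Invented miniature versions of tblSkill / tblSkillMetrics.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tblSkill (ID INTEGER, Name TEXT);
CREATE TABLE tblSkillMetrics (SkillID INTEGER, EmployeeID INTEGER, Scale INTEGER);
INSERT INTO tblSkill VALUES (1, 'SQL'), (2, 'Python');
INSERT INTO tblSkillMetrics VALUES
  (1, 100, 2), (1, 101, 2), (1, 102, 4),
  (2, 100, 1), (2, 102, 3);
""")

# COUNT ignores NULLs, so each CASE only counts rows with the matching Scale.
rows = conn.execute("""
SELECT tblSkill.Name AS skillname,
       COUNT(CASE tblSkillMetrics.Scale WHEN 1 THEN EmployeeID END) AS NotApplicable,
       COUNT(CASE tblSkillMetrics.Scale WHEN 2 THEN EmployeeID END) AS Beginner,
       COUNT(CASE tblSkillMetrics.Scale WHEN 3 THEN EmployeeID END) AS Proficient,
       COUNT(CASE tblSkillMetrics.Scale WHEN 4 THEN EmployeeID END) AS Expert
FROM tblSkill
JOIN tblSkillMetrics ON tblSkillMetrics.SkillID = tblSkill.ID
GROUP BY tblSkill.Name
ORDER BY skillname DESC
""").fetchall()
```

Each result row carries one count column per scale value, which is exactly what the PIVOT version produces as well.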
qid & accept id:
(16426094, 16426214)
query:
Query Data From Two Tables + One Table Must Only Query Using Most Recent Data
soup:
You can simply add another JOIN to the existing query that you have. And it's a lot cleaner when you use an explicit (INNER) JOIN matching keys in the ON clause, compared with an inferred CROSS JOIN (using comma-separated tables) that is filtered in the WHERE clause:
\nSELECT p.VehicleKey, p.Timestamp, p.Latitude, p.Longitude, p.Speed, v.Name\nFROM AVLVehiclePosition p\nJOIN Vehicles v\n ON p.VehicleKey = v.VehicleKey\nJOIN (SELECT max(Timestamp) as maxtime, VehicleKey\n FROM AVLVehiclePosition\n GROUP BY VehicleKey) maxresults \n ON p.VehicleKey = maxresults.VehicleKey \n AND p.Timestamp = maxresults.maxtime\n
\nAnd you can make this even cleaner if you make use of ROW_NUMBER():
\nWITH maxResults AS (\n SELECT p.VehicleKey, p.Timestamp, p.Latitude, p.Longitude, p.Speed, v.Name,\n ROW_NUMBER() OVER (PARTITION BY p.VehicleKey ORDER BY p.Timestamp DESC) rowNum\n FROM AVLVehiclePosition p\n JOIN Vehicles v\n ON p.VehicleKey = v.VehicleKey)\nSELECT * FROM maxResults\nWHERE rowNum = 1\n
\n
soup wrap:
You can simply add another JOIN to the existing query that you have. And it's a lot cleaner when you use an explicit (INNER) JOIN matching keys in the ON clause, compared with an inferred CROSS JOIN (using comma-separated tables) that is filtered in the WHERE clause:
SELECT p.VehicleKey, p.Timestamp, p.Latitude, p.Longitude, p.Speed, v.Name
FROM AVLVehiclePosition p
JOIN Vehicles v
ON p.VehicleKey = v.VehicleKey
JOIN (SELECT max(Timestamp) as maxtime, VehicleKey
FROM AVLVehiclePosition
GROUP BY VehicleKey) maxresults
ON p.VehicleKey = maxresults.VehicleKey
AND p.Timestamp = maxresults.maxtime
And you can make this even cleaner if you make use of ROW_NUMBER():
WITH maxResults AS (
SELECT p.VehicleKey, p.Timestamp, p.Latitude, p.Longitude, p.Speed, v.Name,
ROW_NUMBER() OVER (PARTITION BY p.VehicleKey ORDER BY p.Timestamp DESC) rowNum
FROM AVLVehiclePosition p
JOIN Vehicles v
ON p.VehicleKey = v.VehicleKey)
SELECT * FROM maxResults
WHERE rowNum = 1
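The ROW_NUMBER() form translates almost verbatim to other engines with window functions. A quick sanity check using Python's sqlite3 (SQLite 3.25+ supports window functions); the table and column names follow the answer, and the position rows are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE AVLVehiclePosition (VehicleKey INT, Timestamp TEXT, Speed INT);
CREATE TABLE Vehicles (VehicleKey INT, Name TEXT);
INSERT INTO Vehicles VALUES (1, 'Truck A'), (2, 'Van B');
INSERT INTO AVLVehiclePosition VALUES
  (1, '2023-01-01 09:00', 40),
  (1, '2023-01-01 10:00', 55),
  (2, '2023-01-01 09:30', 30);
""")

# rowNum = 1 picks the newest position per vehicle.
rows = conn.execute("""
WITH maxResults AS (
  SELECT p.VehicleKey, p.Timestamp, p.Speed, v.Name,
         ROW_NUMBER() OVER (PARTITION BY p.VehicleKey
                            ORDER BY p.Timestamp DESC) AS rowNum
  FROM AVLVehiclePosition p
  JOIN Vehicles v ON p.VehicleKey = v.VehicleKey)
SELECT VehicleKey, Timestamp, Speed, Name
FROM maxResults
WHERE rowNum = 1
ORDER BY VehicleKey
""").fetchall()
```

Each vehicle comes back exactly once, with its latest timestamp.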
qid & accept id:
(16442686, 16442782)
query:
SQL Count of columns result for all existing Dates in the table
soup:
If you want the sum of all dates, just remove the where clause:
\nselect DTTransaction.machinename, count(DTTransaction.machinename)\nfrom DTTransaction join\n DTHotelReservation\n on DTTransaction.TransactionID = DTHotelReservation.TransactionID and\n DTHotelReservation.HCOMCID in (415428, 415429, 415430, 415431, 415432)\ngroup by DTTransaction.machinename\n
\nIf you want the results by date, then include that in your group by. For instance,
\nselect DTTransaction.machinename, convert(varchar(10),BookedOn,101), count(DTTransaction.machinename)\nfrom DTTransaction join\n DTHotelReservation\n on DTTransaction.TransactionID = DTHotelReservation.TransactionID and\n DTHotelReservation.HCOMCID in (415428, 415429, 415430, 415431, 415432)\ngroup by DTTransaction.machinename, convert(varchar(10),BookedOn,101)\norder by 1, MAX(BookedOn)\n
\nI included an order by clause, so the results will be in order by date within each machine name.
\n
soup wrap:
If you want the sum of all dates, just remove the where clause:
select DTTransaction.machinename, count(DTTransaction.machinename)
from DTTransaction join
DTHotelReservation
on DTTransaction.TransactionID = DTHotelReservation.TransactionID and
DTHotelReservation.HCOMCID in (415428, 415429, 415430, 415431, 415432)
group by DTTransaction.machinename
If you want the results by date, then include that in your group by. For instance,
select DTTransaction.machinename, convert(varchar(10),BookedOn,101), count(DTTransaction.machinename)
from DTTransaction join
DTHotelReservation
on DTTransaction.TransactionID = DTHotelReservation.TransactionID and
DTHotelReservation.HCOMCID in (415428, 415429, 415430, 415431, 415432)
group by DTTransaction.machinename, convert(varchar(10),BookedOn,101)
order by 1, MAX(BookedOn)
I included an order by clause, so the results will be in order by date within each machine name.
qid & accept id:
(16487093, 16488203)
query:
SQL Full outer join or alternative solution
soup:
(assuming the OP wants a fully symmetric outer 4-join)
\nWITH four AS (\n SELECT id, event_dt FROM t1\n UNION\n SELECT id, event_dt FROM t2\n UNION\n SELECT id, event_dt FROM t3\n UNION\n SELECT id, event_dt FROM t4\n )\nSELECT f.id, f.event_dt\n , t1.amt1\n , t2.amt2\n , t3.amt3\n , t4.amt4\nFROM four f\nLEFT JOIN t1 ON t1.id = f.id AND t1.event_dt = f.event_dt\nLEFT JOIN t2 ON t2.id = f.id AND t2.event_dt = f.event_dt\nLEFT JOIN t3 ON t3.id = f.id AND t3.event_dt = f.event_dt\nLEFT JOIN t4 ON t4.id = f.id AND t4.event_dt = f.event_dt\nORDER BY id, event_dt\n ;\n
\nResult:
\n id | event_dt | amt1 | amt2 | amt3 | amt4 \n----+------------+------+------+------+------\n 1 | 2012-04-01 | 1 | | | \n 1 | 2012-04-02 | 1 | | 3 | \n 1 | 2012-04-03 | 1 | | 3 | \n 1 | 2012-04-06 | | 2 | 3 | 4\n 1 | 2012-04-07 | | 2 | | \n 2 | 2012-04-01 | 40 | | | \n 2 | 2012-04-02 | | | 3 | \n 2 | 2012-04-03 | | | 3 | \n 2 | 2012-04-04 | 40 | | | \n(9 rows)\n
\nBTW: after the UNION in four, LEFT JOINs do the same job as FULL JOINs here, because four already contains all the possible {id, event_dt} pairs.
\n
soup wrap:
(assuming the OP wants a fully symmetric outer 4-join)
WITH four AS (
SELECT id, event_dt FROM t1
UNION
SELECT id, event_dt FROM t2
UNION
SELECT id, event_dt FROM t3
UNION
SELECT id, event_dt FROM t4
)
SELECT f.id, f.event_dt
, t1.amt1
, t2.amt2
, t3.amt3
, t4.amt4
FROM four f
LEFT JOIN t1 ON t1.id = f.id AND t1.event_dt = f.event_dt
LEFT JOIN t2 ON t2.id = f.id AND t2.event_dt = f.event_dt
LEFT JOIN t3 ON t3.id = f.id AND t3.event_dt = f.event_dt
LEFT JOIN t4 ON t4.id = f.id AND t4.event_dt = f.event_dt
ORDER BY id, event_dt
;
Result:
id | event_dt | amt1 | amt2 | amt3 | amt4
----+------------+------+------+------+------
1 | 2012-04-01 | 1 | | |
1 | 2012-04-02 | 1 | | 3 |
1 | 2012-04-03 | 1 | | 3 |
1 | 2012-04-06 | | 2 | 3 | 4
1 | 2012-04-07 | | 2 | |
2 | 2012-04-01 | 40 | | |
2 | 2012-04-02 | | | 3 |
2 | 2012-04-03 | | | 3 |
2 | 2012-04-04 | 40 | | |
(9 rows)
BTW: after the UNION in four, LEFT JOINs do the same job as FULL JOINs here, because four already contains all the possible {id, event_dt} pairs.
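That last point is easy to verify on any engine, including ones without FULL JOIN at all. Here is a two-table reduction of the trick using Python's sqlite3 (SQLite historically lacked FULL JOIN); t1/t2 and their rows are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (id INT, event_dt TEXT, amt1 INT);
CREATE TABLE t2 (id INT, event_dt TEXT, amt2 INT);
INSERT INTO t1 VALUES (1, '2012-04-01', 1), (1, '2012-04-02', 1);
INSERT INTO t2 VALUES (1, '2012-04-02', 2), (1, '2012-04-06', 2);
""")

# The UNION collects every {id, event_dt} pair, so plain LEFT JOINs
# back to each table behave like a symmetric FULL JOIN.
rows = conn.execute("""
WITH allkeys AS (
  SELECT id, event_dt FROM t1
  UNION
  SELECT id, event_dt FROM t2)
SELECT k.id, k.event_dt, t1.amt1, t2.amt2
FROM allkeys k
LEFT JOIN t1 ON t1.id = k.id AND t1.event_dt = k.event_dt
LEFT JOIN t2 ON t2.id = k.id AND t2.event_dt = k.event_dt
ORDER BY k.id, k.event_dt
""").fetchall()
```

Rows present in only one table show NULL for the other table's amount, exactly as in the nine-row result above.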
qid & accept id:
(16490625, 16490738)
query:
SQL Server 2012: JOIN 3 tables for a condition
soup:
You can do this with a rather inefficient, nested query structure in an update clause.
\nIn SQL Server syntax:
\nupdate tableC\n set Name = (select top 1 b.name\n from TableB b \n where b.name not in (select name from TableA a where a.id = TableC.id)\n order by NEWID()\n )\n
\nThe innermost select from TableA gets all the names for the same id. The where clause chooses names that are not in this list. TOP 1 combined with ORDER BY NEWID() randomly selects one of the names.
\nHere is an example of the code that works, according to my understanding of the problem:
\ndeclare @tableA table (id int, name varchar(2));\ndeclare @tableB table (name varchar(2));\ndeclare @tableC table (id int, name varchar(2))\n\ninsert into @tableA(id, name)\n select 01, 'A4' union all\n select 01, 'SH' union all\n select 01, '9K' union all\n select 02, 'M1' union all\n select 02, 'L4' union all\n select 03, '2G' union all\n select 03, '99';\n\ninsert into @tableB(name)\n select '5G' union all\n select 'U8' union all\n select '02' union all\n select '45' union all\n select '23' union all\n select 'J7' union all\n select '99' union all\n select '9F' union all\n select 'A4' union all\n select 'H2';\n\n\ninsert into @tableC(id)\n select 01 union all\n select 01 union all\n select 01 union all\n select 02 union all\n select 02 union all\n select 03 union all\n select 03;\n\n/* \nselect * from @tableA;\nselect * from @tableB;\nselect * from @tableC;\n */\n\nupdate c\n set Name = (select top 1 b.name\n from @TableB b \n where b.name not in (select name from @TableA a where a.id = c.id)\n order by NEWID()\n )\nfrom @tableC c\n\nselect *\nfrom @tableC\n
\n
soup wrap:
You can do this with a rather inefficient, nested query structure in an update clause.
In SQL Server syntax:
update tableC
set Name = (select top 1 b.name
from TableB b
where b.name not in (select name from TableA a where a.id = TableC.id)
order by NEWID()
)
The innermost select from TableA gets all the names for the same id. The where clause chooses names that are not in this list. TOP 1 combined with ORDER BY NEWID() randomly selects one of the names.
Here is an example of the code that works, according to my understanding of the problem:
declare @tableA table (id int, name varchar(2));
declare @tableB table (name varchar(2));
declare @tableC table (id int, name varchar(2))
insert into @tableA(id, name)
select 01, 'A4' union all
select 01, 'SH' union all
select 01, '9K' union all
select 02, 'M1' union all
select 02, 'L4' union all
select 03, '2G' union all
select 03, '99';
insert into @tableB(name)
select '5G' union all
select 'U8' union all
select '02' union all
select '45' union all
select '23' union all
select 'J7' union all
select '99' union all
select '9F' union all
select 'A4' union all
select 'H2';
insert into @tableC(id)
select 01 union all
select 01 union all
select 01 union all
select 02 union all
select 02 union all
select 03 union all
select 03;
/*
select * from @tableA;
select * from @tableB;
select * from @tableC;
*/
update c
set Name = (select top 1 b.name
from @TableB b
where b.name not in (select name from @TableA a where a.id = c.id)
order by NEWID()
)
from @tableC c
select *
from @tableC
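The same "pick a random non-matching name" idea can be replayed on SQLite via Python; NEWID() has no SQLite equivalent, so ORDER BY random() LIMIT 1 stands in for TOP 1 ... ORDER BY NEWID(). The data is a trimmed-down version of the example above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tableA (id INT, name TEXT);
CREATE TABLE tableB (name TEXT);
CREATE TABLE tableC (id INT, name TEXT);
INSERT INTO tableA VALUES (1, 'A4'), (1, 'SH'), (2, 'M1');
INSERT INTO tableB VALUES ('A4'), ('5G'), ('U8'), ('M1'), ('99');
INSERT INTO tableC (id) VALUES (1), (1), (2);
""")

# Correlated subquery: for each row of tableC, pick a random tableB name
# that does not appear in tableA for that id.
conn.execute("""
UPDATE tableC
SET name = (SELECT b.name FROM tableB b
            WHERE b.name NOT IN (SELECT a.name FROM tableA a
                                 WHERE a.id = tableC.id)
            ORDER BY random() LIMIT 1)
""")

rows = conn.execute("SELECT id, name FROM tableC").fetchall()
allowed = {'A4', '5G', 'U8', 'M1', '99'}
forbidden = {1: {'A4', 'SH'}, 2: {'M1'}}
# The picked name must come from tableB and avoid the id's tableA names.
all_valid = all(name in allowed and name not in forbidden[cid]
                for cid, name in rows)
```

The result is random, so the check only asserts the invariant: every assigned name is a valid, non-conflicting choice.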
qid & accept id:
(16507239, 16508385)
query:
join comma delimited data column
soup:
Ideally, your best solution would be to normalize Table2 so you are not storing a comma separated list.
\nOnce you have this data normalized then you can easily query the data. The new table structure could be similar to this:
\nCREATE TABLE T1\n(\n [col1] varchar(2), \n [col2] varchar(5),\n constraint pk1_t1 primary key (col1)\n);\n\nINSERT INTO T1\n ([col1], [col2])\nVALUES\n ('C1', 'john'),\n ('C2', 'alex'),\n ('C3', 'piers'),\n ('C4', 'sara')\n;\n\nCREATE TABLE T2\n(\n [col1] varchar(2), \n [col2] varchar(2),\n constraint pk1_t2 primary key (col1, col2),\n constraint fk1_col2 foreign key (col2) references t1 (col1)\n);\n\nINSERT INTO T2\n ([col1], [col2])\nVALUES\n ('R1', 'C1'),\n ('R1', 'C2'),\n ('R1', 'C4'),\n ('R2', 'C3'),\n ('R2', 'C4'),\n ('R3', 'C1'),\n ('R3', 'C4')\n;\n
\nNormalizing the tables would make it much easier for you to query the data by joining the tables:
\nselect t2.col1, t1.col2\nfrom t2\ninner join t1\n on t2.col2 = t1.col1\n
\nSee Demo
\nThen if you wanted to display the data as a comma-separated list, you could use FOR XML PATH and STUFF:
\nselect distinct t2.col1, \n STUFF(\n (SELECT distinct ', ' + t1.col2\n FROM t1\n inner join t2 t\n on t1.col1 = t.col2\n where t2.col1 = t.col1\n FOR XML PATH ('')), 1, 1, '') col2\nfrom t2;\n
\nSee Demo.
\nIf you are not able to normalize the data, then there are several things that you can do.
\nFirst, you could create a split function that will convert the data stored in the list into rows that can be joined on. The split function would be similar to this:
\nCREATE FUNCTION [dbo].[Split](@String varchar(MAX), @Delimiter char(1)) \nreturns @temptable TABLE (items varchar(MAX)) \nas \nbegin \n declare @idx int \n declare @slice varchar(8000) \n\n select @idx = 1 \n if len(@String)<1 or @String is null return \n\n while @idx!= 0 \n begin \n set @idx = charindex(@Delimiter,@String) \n if @idx!=0 \n set @slice = left(@String,@idx - 1) \n else \n set @slice = @String \n\n if(len(@slice)>0) \n insert into @temptable(Items) values(@slice) \n\n set @String = right(@String,len(@String) - @idx) \n if len(@String) = 0 break \n end \nreturn \nend;\n
\nWhen you use the split function, you can either leave the data in multiple rows or concatenate the values back into a comma-separated list:
\n;with cte as\n(\n select c.col1, t1.col2\n from t1\n inner join \n (\n select t2.col1, i.items col2\n from t2\n cross apply dbo.split(t2.col2, ',') i\n ) c\n on t1.col1 = c.col2\n) \nselect distinct c.col1, \n STUFF(\n (SELECT distinct ', ' + c1.col2\n FROM cte c1\n where c.col1 = c1.col1\n FOR XML PATH ('')), 1, 1, '') col2\nfrom cte c\n
\nSee Demo.
\nA final way that you could get the result is by applying FOR XML PATH directly.
\nselect col1, \n(\n select ', '+t1.col2\n from t1\n where ','+t2.col2+',' like '%,'+cast(t1.col1 as varchar(10))+',%'\n for xml path(''), type\n).value('substring(text()[1], 3)', 'varchar(max)') as col2\nfrom t2;\n
\n\n
soup wrap:
Ideally, your best solution would be to normalize Table2 so you are not storing a comma separated list.
Once you have this data normalized then you can easily query the data. The new table structure could be similar to this:
CREATE TABLE T1
(
[col1] varchar(2),
[col2] varchar(5),
constraint pk1_t1 primary key (col1)
);
INSERT INTO T1
([col1], [col2])
VALUES
('C1', 'john'),
('C2', 'alex'),
('C3', 'piers'),
('C4', 'sara')
;
CREATE TABLE T2
(
[col1] varchar(2),
[col2] varchar(2),
constraint pk1_t2 primary key (col1, col2),
constraint fk1_col2 foreign key (col2) references t1 (col1)
);
INSERT INTO T2
([col1], [col2])
VALUES
('R1', 'C1'),
('R1', 'C2'),
('R1', 'C4'),
('R2', 'C3'),
('R2', 'C4'),
('R3', 'C1'),
('R3', 'C4')
;
Normalizing the tables would make it much easier for you to query the data by joining the tables:
select t2.col1, t1.col2
from t2
inner join t1
on t2.col2 = t1.col1
See Demo
Then if you wanted to display the data as a comma-separated list, you could use FOR XML PATH and STUFF:
select distinct t2.col1,
STUFF(
(SELECT distinct ', ' + t1.col2
FROM t1
inner join t2 t
on t1.col1 = t.col2
where t2.col1 = t.col1
FOR XML PATH ('')), 1, 1, '') col2
from t2;
See Demo.
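For what it's worth, the FOR XML PATH + STUFF concatenation is SQL Server-specific; on other engines a string aggregate does the same job. A sketch with Python's sqlite3, where group_concat() plays that role (tables trimmed down from the example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (col1 TEXT, col2 TEXT);
CREATE TABLE t2 (col1 TEXT, col2 TEXT);
INSERT INTO t1 VALUES ('C1','john'), ('C2','alex'), ('C4','sara');
INSERT INTO t2 VALUES ('R1','C1'), ('R1','C2'), ('R3','C4');
""")

# group_concat rebuilds the comma-separated display from normalized rows.
rows = conn.execute("""
SELECT t2.col1, group_concat(t1.col2, ', ') AS col2
FROM t2
JOIN t1 ON t2.col2 = t1.col1
GROUP BY t2.col1
ORDER BY t2.col1
""").fetchall()
by_key = dict(rows)
```

Note that group_concat makes no ordering guarantee inside the list, so the check below compares the names as a set.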
If you are not able to normalize the data, then there are several things that you can do.
First, you could create a split function that will convert the data stored in the list into rows that can be joined on. The split function would be similar to this:
CREATE FUNCTION [dbo].[Split](@String varchar(MAX), @Delimiter char(1))
returns @temptable TABLE (items varchar(MAX))
as
begin
declare @idx int
declare @slice varchar(8000)
select @idx = 1
if len(@String)<1 or @String is null return
while @idx!= 0
begin
set @idx = charindex(@Delimiter,@String)
if @idx!=0
set @slice = left(@String,@idx - 1)
else
set @slice = @String
if(len(@slice)>0)
insert into @temptable(Items) values(@slice)
set @String = right(@String,len(@String) - @idx)
if len(@String) = 0 break
end
return
end;
When you use the split function, you can either leave the data in multiple rows or concatenate the values back into a comma-separated list:
;with cte as
(
select c.col1, t1.col2
from t1
inner join
(
select t2.col1, i.items col2
from t2
cross apply dbo.split(t2.col2, ',') i
) c
on t1.col1 = c.col2
)
select distinct c.col1,
STUFF(
(SELECT distinct ', ' + c1.col2
FROM cte c1
where c.col1 = c1.col1
FOR XML PATH ('')), 1, 1, '') col2
from cte c
See Demo.
A final way that you could get the result is by applying FOR XML PATH directly.
select col1,
(
select ', '+t1.col2
from t1
where ','+t2.col2+',' like '%,'+cast(t1.col1 as varchar(10))+',%'
for xml path(''), type
).value('substring(text()[1], 3)', 'varchar(max)') as col2
from t2;
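On engines with recursive CTEs, the WHILE-loop split function above can be replaced by a recursive query. A sketch using Python's sqlite3, splitting the comma list by repeatedly peeling off the prefix before the first comma (names invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t2 (col1 TEXT, col2 TEXT)")
conn.execute("INSERT INTO t2 VALUES ('R1', 'C1,C2,C4'), ('R2', 'C3,C4')")

# Each recursive step emits one item and keeps the remainder in `rest`;
# a trailing comma is appended so the last item needs no special case.
rows = conn.execute("""
WITH RECURSIVE split(col1, item, rest) AS (
  SELECT col1, '', col2 || ',' FROM t2
  UNION ALL
  SELECT col1,
         substr(rest, 1, instr(rest, ',') - 1),
         substr(rest, instr(rest, ',') + 1)
  FROM split WHERE rest <> ''
)
SELECT col1, item FROM split WHERE item <> '' ORDER BY col1, item
""").fetchall()
```

This produces one row per list element, which can then be joined like any normalized table.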
qid & accept id:
(16550767, 16550825)
query:
ORACLE Update with MINUS result
soup:
How about:
\nupdate table1\n set d = 'TEST'\n where (a,b,c) not in(select a,b,c from table2);\n
\nEdit:\nThe performance of MINUS is generally poor, due to the sort operation.\nIf any of {a,b,c} are nullable, try the following instead:
\nupdate table1 t1\n set t1.d = 'TEST'\n where not exists(\n select 'x'\n from table2 t2\n where t2.a = t1.a\n and t2.b = t1.b\n and t2.c = t1.c\n );\n
\n
soup wrap:
How about:
update table1
set d = 'TEST'
where (a,b,c) not in(select a,b,c from table2);
Edit:
The performance of MINUS is generally poor, due to the sort operation.
If any of {a,b,c} are nullable, try the following instead:
update table1 t1
set t1.d = 'TEST'
where not exists(
select 'x'
from table2 t2
where t2.a = t1.a
and t2.b = t1.b
and t2.c = t1.c
);
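The nullability caveat is worth seeing in action: a single NULL in the NOT IN subquery makes the whole predicate UNKNOWN and suppresses every row, while NOT EXISTS behaves as expected. A tiny single-column demo using Python's sqlite3 (data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (a INT);
CREATE TABLE table2 (a INT);
INSERT INTO table1 VALUES (1), (2);
INSERT INTO table2 VALUES (1), (NULL);
""")

# 2 NOT IN (1, NULL) evaluates to UNKNOWN, so nothing qualifies.
not_in = conn.execute(
    "SELECT a FROM table1 WHERE a NOT IN (SELECT a FROM table2)").fetchall()

# NOT EXISTS only checks for matching rows, so NULLs are harmless.
not_exists = conn.execute("""
SELECT a FROM table1 t1
WHERE NOT EXISTS (SELECT 1 FROM table2 t2 WHERE t2.a = t1.a)
""").fetchall()
```

The NOT IN form returns nothing at all, while NOT EXISTS correctly finds the unmatched row.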
qid & accept id:
(16569297, 16569344)
query:
T-SQL How to build an aggregate table based on max values from a group?
soup:
SELECT account_code, product_id\nFROM (\n SELECT account_code, product_id, num_purchases,\n DENSE_RANK() OVER (PARTITION BY account_code \n ORDER BY num_purchases DESC) RowID\n FROM TableName\n )records\nWHERE RowID = 1\n
\n\n- SQLFiddle Demo
\n
\nOUTPUT
\n╔══════════════╦════════════╗\n║ ACCOUNT_CODE ║ PRODUCT_ID ║\n╠══════════════╬════════════╣\n║ abc123 ║ 1 ║\n║ xyz789 ║ 1 ║\n╚══════════════╩════════════╝\n
\n
soup wrap:
SELECT account_code, product_id
FROM (
SELECT account_code, product_id, num_purchases,
DENSE_RANK() OVER (PARTITION BY account_code
ORDER BY num_purchases DESC) RowID
FROM TableName
)records
WHERE RowID = 1
OUTPUT
╔══════════════╦════════════╗
║ ACCOUNT_CODE ║ PRODUCT_ID ║
╠══════════════╬════════════╣
║ abc123 ║ 1 ║
║ xyz789 ║ 1 ║
╚══════════════╩════════════╝
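The DENSE_RANK() pattern can be replayed on SQLite (3.25+) via Python; the sample data is invented to mirror the two accounts in the output table, and the alias is renamed from RowID to rnk to avoid any clash with SQLite's built-in rowid:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE TableName (account_code TEXT, product_id INT, num_purchases INT)")
conn.executemany("INSERT INTO TableName VALUES (?,?,?)", [
    ('abc123', 1, 9), ('abc123', 2, 4),
    ('xyz789', 1, 7), ('xyz789', 3, 2),
])

# rnk = 1 keeps the most-purchased product per account (ties would all rank 1).
rows = conn.execute("""
SELECT account_code, product_id
FROM (SELECT account_code, product_id,
             DENSE_RANK() OVER (PARTITION BY account_code
                                ORDER BY num_purchases DESC) AS rnk
      FROM TableName) records
WHERE rnk = 1
ORDER BY account_code
""").fetchall()
```

Note that DENSE_RANK (unlike ROW_NUMBER) returns every tied top product, which is often what an aggregate table wants.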
qid & accept id:
(16668803, 16668874)
query:
Sql - Fetch next value to replace variable value
soup:
Try this one -
\nQuery:
\nDECLARE \n @prime_schema SYSNAME = 'aaa'\n , @next_schema SYSNAME = 'bbb'\n\nDECLARE @SQL NVARCHAR(MAX)\nSELECT @SQL = (\n SELECT CHAR(13) + '\n SELECT * \n INTO [' + @next_schema + '].[' + o.name + ']\n FROM [' + s.name + '].[' + o.name + ']\n WHERE 1 != 1'\n FROM sys.objects o WITH (NOWAIT)\n JOIN sys.schemas s WITH (NOWAIT) ON o.[schema_id] = s.[schema_id]\n WHERE o.[type] = 'U'\n AND s.name = @prime_schema\n AND o.name IN ('table1', 'table2', 'table3')\n FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)')\n\nPRINT @SQL\n
\nOutput:
\nSELECT * \nINTO [bbb].[table1]\nFROM [aaa].[table1]\nWHERE 1 != 1\n\nSELECT * \nINTO [bbb].[table2]\nFROM [aaa].[table2]\nWHERE 1 != 1\n\nSELECT * \nINTO [bbb].[table3]\nFROM [aaa].[table3]\nWHERE 1 != 1\n
\n
soup wrap:
Try this one -
Query:
DECLARE
@prime_schema SYSNAME = 'aaa'
, @next_schema SYSNAME = 'bbb'
DECLARE @SQL NVARCHAR(MAX)
SELECT @SQL = (
SELECT CHAR(13) + '
SELECT *
INTO [' + @next_schema + '].[' + o.name + ']
FROM [' + s.name + '].[' + o.name + ']
WHERE 1 != 1'
FROM sys.objects o WITH (NOWAIT)
JOIN sys.schemas s WITH (NOWAIT) ON o.[schema_id] = s.[schema_id]
WHERE o.[type] = 'U'
AND s.name = @prime_schema
AND o.name IN ('table1', 'table2', 'table3')
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)')
PRINT @SQL
Output:
SELECT *
INTO [bbb].[table1]
FROM [aaa].[table1]
WHERE 1 != 1
SELECT *
INTO [bbb].[table2]
FROM [aaa].[table2]
WHERE 1 != 1
SELECT *
INTO [bbb].[table3]
FROM [aaa].[table3]
WHERE 1 != 1
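Stripped of the T-SQL machinery, the FOR XML PATH trick is just string concatenation: one SELECT ... INTO statement per table name, joined into a single batch. A plain Python sketch of the same shape (the table list stands in for the sys.objects query):

```python
# Hypothetical inputs mirroring the variables in the answer.
prime_schema, next_schema = "aaa", "bbb"
tables = ["table1", "table2", "table3"]  # stand-in for the sys.objects lookup

# Build one empty-copy statement per table; WHERE 1 != 1 copies structure only.
sql = "\n".join(
    f"SELECT *\nINTO [{next_schema}].[{name}]\n"
    f"FROM [{prime_schema}].[{name}]\nWHERE 1 != 1\n"
    for name in tables
)
```

Printing `sql` yields the same three-statement batch shown in the output above.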
qid & accept id:
(16685165, 16686190)
query:
Sql dividing one record into to many records
soup:
Try this:
\nSELECT\n user_id,\n SUBSTRING_INDEX(tags,'<',2) as tag\nFROM\n t1\nUNION ALL\nSELECT\n user_id,\n SUBSTRING_INDEX(tags,'>',-2) as tag\nFROM\n t1\n
\nUPDATE: for distinct values you can use:
\nSELECT\n user_id,\n tag\nFROM (\n SELECT\n user_id,\n SUBSTRING_INDEX(tags,'<',2) as tag\n FROM\n t1\n UNION ALL\n SELECT\n user_id,\n SUBSTRING_INDEX(tags,'>',-2) as tag\n FROM\n t1\n) as tmp\n GROUP BY\n user_id,\n tag\n
\n
soup wrap:
Try this:
SELECT
user_id,
SUBSTRING_INDEX(tags,'<',2) as tag
FROM
t1
UNION ALL
SELECT
user_id,
SUBSTRING_INDEX(tags,'>',-2) as tag
FROM
t1
UPDATE: for distinct values you can use:
SELECT
user_id,
tag
FROM (
SELECT
user_id,
SUBSTRING_INDEX(tags,'<',2) as tag
FROM
t1
UNION ALL
SELECT
user_id,
SUBSTRING_INDEX(tags,'>',-2) as tag
FROM
t1
) as tmp
GROUP BY
user_id,
tag
qid & accept id:
(16688990, 16691059)
query:
How to display progress bar while executing big SQLCommand VB.Net
soup:
Here is a cut-down example of how to do asynchronous work with VB.Net 4.0.
\nLet's imagine you have a form that has the following imports:
\nImports System.Windows.Forms\nImports System.Threading\nImports System.Threading.Tasks\n
\nThat form has two controls
\nPrivate WithEvents DoSomething As Button\nPrivate WithEvents Progress As ProgressBar\n
\nSomewhere in your application we have a Function called ExecuteSlowStuff, this function is the equivalent of your executeMyQuery. The important part is the Action parameter which the function uses to show it is making progress.
\nPrivate Shared Function ExecuteSlowStuff(ByVal progress As Action) As Integer\n Dim result = 0\n For i = 0 To 10000\n result += i\n Thread.Sleep(500)\n progress()\n Next\n\n Return result\nEnd Function\n
\nLet's say this work is started by the click of the DoSomething button.
\nPrivate Sub Start() Handles DoSomething.Click\n    Dim slowStuff = Task(Of Integer).Factory.StartNew(\n        Function() ExecuteSlowStuff(AddressOf Me.ShowProgress))\nEnd Sub\n
\nYou're probably wondering where ShowProgress comes from, that is the messier bit.
\nPrivate Sub ShowProgress()\n    If Me.Progress.InvokeRequired Then\n        Dim cross As New Action(AddressOf Me.ShowProgress)\n        Me.Invoke(cross)\n    Else\n        If Me.Progress.Value = Me.Progress.Maximum Then\n            Me.Progress.Value = Me.Progress.Minimum\n        Else\n            Me.Progress.Increment(1)\n        End If\n\n        Me.Progress.Refresh()\n    End If\nEnd Sub\n
\nNote that because ShowProgress can be invoked from another thread, it checks for cross thread calls. In that case it invokes itself on the main thread.
\n
soup wrap:
Here is a cut-down example of how to do asynchronous work with VB.Net 4.0.
Let's imagine you have a form that has the following imports:
Imports System.Windows.Forms
Imports System.Threading
Imports System.Threading.Tasks
That form has two controls
Private WithEvents DoSomething As Button
Private WithEvents Progress As ProgressBar
Somewhere in your application we have a Function called ExecuteSlowStuff, this function is the equivalent of your executeMyQuery. The important part is the Action parameter which the function uses to show it is making progress.
Private Shared Function ExecuteSlowStuff(ByVal progress As Action) As Integer
Dim result = 0
For i = 0 To 10000
result += i
Thread.Sleep(500)
progress()
Next
Return result
End Function
Let's say this work is started by the click of the DoSomething button.
Private Sub Start() Handles DoSomething.Click
Dim slowStuff = Task(Of Integer).Factory.StartNew(
Function() ExecuteSlowStuff(AddressOf Me.ShowProgress))
End Sub
You're probably wondering where ShowProgress comes from, that is the messier bit.
Private Sub ShowProgress()
If Me.Progress.InvokeRequired Then
Dim cross As New Action(AddressOf Me.ShowProgress)
Me.Invoke(cross)
Else
If Me.Progress.Value = Me.Progress.Maximum Then
Me.Progress.Value = Me.Progress.Minimum
Else
Me.Progress.Increment(1)
End If
Me.Progress.Refresh()
End If
End Sub
Note that because ShowProgress can be invoked from another thread, it checks for cross thread calls. In that case it invokes itself on the main thread.
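The structure here (worker thread does the slow job, progress calls get marshaled back to the main thread) is not VB-specific. A small Python sketch of the same shape, where a queue plays the role of Control.Invoke and the "UI" thread drains it; all names are invented for the demo:

```python
import threading
import queue

# Progress events posted by the worker; the main thread drains them.
progress_events = queue.Queue()

def execute_slow_stuff(progress, steps=5):
    """Stand-in for the slow query; calls progress() once per step."""
    result = 0
    for i in range(steps):
        result += i
        progress()          # runs on the worker thread
    return result

results = {}
worker = threading.Thread(
    target=lambda: results.setdefault(
        "sum", execute_slow_stuff(lambda: progress_events.put(1))))
worker.start()
worker.join()

# Main thread: consume progress events (the Invoke/marshaling equivalent)
# and advance the notional progress bar once per event.
ticks = 0
while not progress_events.empty():
    progress_events.get()
    ticks += 1
```

A real GUI would drain the queue from a timer on the UI thread instead of after join(), but the ownership rule is the same: only the main thread touches the progress bar.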
qid & accept id:
(16756054, 16756109)
query:
Convert select result to column name in SQL Server
soup:
How about
\nSELECT \n CASE datename(dw,getdate())\n WHEN 'Monday' THEN Monday\n WHEN 'Tuesday' THEN Tuesday\n WHEN 'Wednesday' THEN Wednesday\n WHEN 'Thursday' THEN Thursday\n WHEN 'Friday' THEN Friday\n WHEN 'Saturday' THEN Saturday\n WHEN 'Sunday' THEN Sunday\n END today\n FROM @MyTemp\n WHERE Name = 'Test'\n
\nSample output:
\n| TODAY |\n---------\n| 09:30 |\n
\n\n
soup wrap:
How about
SELECT
CASE datename(dw,getdate())
WHEN 'Monday' THEN Monday
WHEN 'Tuesday' THEN Tuesday
WHEN 'Wednesday' THEN Wednesday
WHEN 'Thursday' THEN Thursday
WHEN 'Friday' THEN Friday
WHEN 'Saturday' THEN Saturday
WHEN 'Sunday' THEN Sunday
END today
FROM @MyTemp
WHERE Name = 'Test'
Sample output:
| TODAY |
---------
| 09:30 |
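The weekday-to-column CASE translates to other engines with whatever weekday function they offer. A sketch on Python's sqlite3, where strftime('%w') ('0' = Sunday) replaces datename(dw, ...); a fixed date stands in for getdate() so the result is reproducible, and the table/columns are a trimmed-down invention:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTemp (Name TEXT, Monday TEXT, Tuesday TEXT)")
conn.execute("INSERT INTO MyTemp VALUES ('Test', '09:30', '10:00')")

# 2013-05-27 was a Monday; in real use, 'now' would replace the literal.
row = conn.execute("""
SELECT CASE strftime('%w', '2013-05-27')
         WHEN '1' THEN Monday
         WHEN '2' THEN Tuesday
       END AS today
FROM MyTemp WHERE Name = 'Test'
""").fetchone()
```

The CASE picks the column whose name matches the current weekday, which is the whole trick.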
qid & accept id:
(16797418, 16797478)
query:
TSql Sum By Date
soup:
If you want the number of records for each day:
\nSELECT DTTM,COUNT(*) AS Total\nFROM \n[Audits].[dbo].[Miscount]\nGroup by DTTM\nOrder by DTTM desc\n
\nOr if you want a sum of a field on each record:
\nSELECT DTTM,SUM(field1) AS Sum\nFROM \n[Audits].[dbo].[Miscount]\nGroup by DTTM\nOrder by DTTM desc\n
\nOr if DTTM is a datetime then you can use:
\nSELECT DATEADD(dd, 0, DATEDIFF(dd, 0, DTTM)) AS DTTM,COUNT(*) AS Total\nFROM \n[Audits].[dbo].[Miscount]\nGroup by DATEADD(dd, 0, DATEDIFF(dd, 0, DTTM))\nOrder by DATEADD(dd, 0, DATEDIFF(dd, 0, DTTM)) desc\n
\nNewer versions of SQL Server support a Date type, so you can do this instead:
\nSELECT CAST(DTTM AS Date) AS DTTM,COUNT(*) AS Total\nFROM \n[Audits].[dbo].[Miscount]\nGroup by CAST(DTTM AS Date)\nOrder by CAST(DTTM AS Date) desc\n
\n
soup wrap:
If you want the number of records for each day:
SELECT DTTM,COUNT(*) AS Total
FROM
[Audits].[dbo].[Miscount]
Group by DTTM
Order by DTTM desc
Or if you want a sum of a field on each record:
SELECT DTTM,SUM(field1) AS Sum
FROM
[Audits].[dbo].[Miscount]
Group by DTTM
Order by DTTM desc
Or if DTTM is a datetime then you can use:
SELECT DATEADD(dd, 0, DATEDIFF(dd, 0, DTTM)) AS DTTM,COUNT(*) AS Total
FROM
[Audits].[dbo].[Miscount]
Group by DATEADD(dd, 0, DATEDIFF(dd, 0, DTTM))
Order by DATEADD(dd, 0, DATEDIFF(dd, 0, DTTM)) desc
Newer versions of SQL Server support a Date type, so you can do this instead:
SELECT CAST(DTTM AS Date) AS DTTM,COUNT(*) AS Total
FROM
[Audits].[dbo].[Miscount]
Group by CAST(DTTM AS Date)
Order by CAST(DTTM AS Date) desc
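The "strip the time part before grouping" idea works anywhere; on SQLite the date() function plays the role of CAST(DTTM AS Date). A quick Python re-run with invented timestamps (table name follows the answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Miscount (DTTM TEXT)")
conn.executemany("INSERT INTO Miscount VALUES (?)", [
    ('2013-05-01 09:15:00',),
    ('2013-05-01 17:40:00',),
    ('2013-05-02 08:05:00',),
])

# date() truncates the time component, so rows collapse onto calendar days.
rows = conn.execute("""
SELECT date(DTTM) AS day, COUNT(*) AS Total
FROM Miscount
GROUP BY date(DTTM)
ORDER BY date(DTTM) DESC
""").fetchall()
```

Without the truncation, each distinct timestamp would form its own group and every count would be 1.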
qid & accept id:
(16799445, 16799630)
query:
Select date + 3 days, not including weekends and holidays
soup:
EDIT:\nChanged to include non-workdays as valid fromDates.
\nWITH rankedDates AS\n (\n SELECT \n thedate\n , ROW_NUMBER()\n OVER(\n ORDER BY thedate\n ) dateRank\n FROM \n calendar c\n WHERE \n c.isweekday = 1 \n AND \n c.isholiday = 0\n )\nSELECT \n c1.fromdate\n , rd2.thedate todate\nFROM\n ( \n SELECT \n c.thedate fromDate\n , \n (\n SELECT \n TOP 1 daterank\n FROM \n rankedDates rd\n WHERE\n rd.thedate <= c.thedate\n ORDER BY \n thedate DESC\n ) dateRank\n FROM \n calendar c\n ) c1 \nLEFT JOIN\n rankedDates rd2\n ON \n c1.dateRank + 3 = rd2.dateRank \n
\nYou could put a date rank column on the calendar table to simplify this and avoid the CTE:
\nCREATE TABLE\n calendar\n (\n TheDate DATETIME PRIMARY KEY\n , isweekday BIT NOT NULL\n , isHoliday BIT NOT NULL DEFAULT 0\n , dateRank INT NOT NULL\n );\n
\nThen you'd set the daterank column only where it's a non-holiday weekday.
\n
soup wrap:
EDIT:
Changed to include non-workdays as valid fromDates.
WITH rankedDates AS
(
SELECT
thedate
, ROW_NUMBER()
OVER(
ORDER BY thedate
) dateRank
FROM
calendar c
WHERE
c.isweekday = 1
AND
c.isholiday = 0
)
SELECT
c1.fromdate
, rd2.thedate todate
FROM
(
SELECT
c.thedate fromDate
,
(
SELECT
TOP 1 daterank
FROM
rankedDates rd
WHERE
rd.thedate <= c.thedate
ORDER BY
thedate DESC
) dateRank
FROM
calendar c
) c1
LEFT JOIN
rankedDates rd2
ON
c1.dateRank + 3 = rd2.dateRank
You could put a date rank column on the calendar table to simplify this and avoid the CTE:
CREATE TABLE
calendar
(
TheDate DATETIME PRIMARY KEY
, isweekday BIT NOT NULL
, isHoliday BIT NOT NULL DEFAULT 0
, dateRank INT NOT NULL
);
Then you'd set the daterank column only where it's a non-holiday weekday.
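The core idea (rank only the valid workdays, then join rank to rank + 3) is portable. A simplified sketch on Python's sqlite3, restricted to workday fromdates for brevity, with an invented week of calendar rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE calendar (thedate TEXT, isweekday INT, isholiday INT);
INSERT INTO calendar VALUES
 ('2013-05-30', 1, 0),  -- Thu
 ('2013-05-31', 1, 0),  -- Fri
 ('2013-06-01', 0, 0),  -- Sat
 ('2013-06-02', 0, 0),  -- Sun
 ('2013-06-03', 1, 0),  -- Mon
 ('2013-06-04', 1, 0),  -- Tue
 ('2013-06-05', 1, 0);  -- Wed
""")

# Rank the workdays, then a date is "3 business days later" when its rank
# is exactly 3 above the starting date's rank.
rows = conn.execute("""
WITH rankedDates AS (
  SELECT thedate,
         ROW_NUMBER() OVER (ORDER BY thedate) AS dateRank
  FROM calendar WHERE isweekday = 1 AND isholiday = 0)
SELECT r1.thedate AS fromdate, r2.thedate AS todate
FROM rankedDates r1
LEFT JOIN rankedDates r2 ON r1.dateRank + 3 = r2.dateRank
ORDER BY r1.thedate
""").fetchall()
```

Thursday jumps over the weekend to Tuesday, Friday to Wednesday; later dates have no +3 partner in this tiny calendar, hence the NULLs.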
qid & accept id:
(16874590, 16874868)
query:
Order SQL request when each row contains id of the next one
soup:
Solutions for SQL Server 2008-2012, PostgreSQL 9.1.9, Oracle 11g
\nActually, a recursive CTE is a solution for almost all current RDBMS, including PostgreSQL (explanations and an example shown below). However, there is a better, optimized solution for Oracle DBs: hierarchical queries.
\nNOCYCLE instructs Oracle to return rows even if your data has a loop in it.
\nCONNECT_BY_ROOT gives you access to the root element, even several layers down in the query.
\nUsing the HR schema:
\nThe corresponding code for Oracle 11g:
\nselect\nb.id_bus_line, b.id_bus_stop\nfrom BusLine_BusStop b\nstart with b.is_first_stop = 1\nconnect by nocycle prior b.id_next_bus_stop = b.id_bus_stop and prior b.id_bus_line = b.id_bus_line\n
\nDEMO for Oracle 11g (code of my own).
\nPlease note that recursive CTEs are the standard form, defined in the SQL:1999 norm. As you can see, there are several differences between SQL Server and PostgreSQL.
\nThe following solution is for SQL Server 2012:
\n;WITH route AS\n(\n SELECT BusLineId, BusStopId, NextBusStopId\n FROM BusLine_BusStop\n WHERE IsFirstStop = 1\n UNION ALL\n SELECT b.BusLineId, b.BusStopId, b.NextBusStopId\n FROM BusLine_BusStop b\n INNER JOIN route r\n ON r.BusLineId = b.BusLineId\n AND r.NextBusStopId = b.BusStopId\n WHERE IsFirstStop = 0 or IsFirstStop is null\n)\nSELECT BusLineId, BusStopId\nFROM route\nORDER BY BusLineId\n
\nDEMO for SQL Server 2012 (inspired by T I).
\nAnd this one is for PostgreSQL 9.1.9 (it is not optimal but should work):
\nThe trick consists in the creation of a dedicated temporary sequence for the current session that you can reset.
\ncreate temp sequence rownum;\n\nWITH final_route AS\n(\n WITH RECURSIVE route AS\n (\n SELECT BusLineId, BusStopId, NextBusStopId\n FROM BusLine_BusStop\n WHERE IsFirstStop = 1\n UNION ALL\n SELECT b.BusLineId, b.BusStopId, b.NextBusStopId\n FROM BusLine_BusStop b\n INNER JOIN route r\n ON r.BusLineId = b.BusLineId\n AND r.NextBusStopId = b.BusStopId\n WHERE IsFirstStop = 0 or IsFirstStop is null\n )\n SELECT BusLineId, BusStopId, nextval('rownum') as rownum\n FROM route\n)\nSELECT BusLineId, BusStopId\nFROM final_route\nORDER BY BusLineId, rownum;\n
\nDEMO for PostgreSQL 9.1.9 of my own.
\nEDIT:
\nSorry for the multiple edits. It is quite uncommon to connect records by children record instead of by its parent.\nYou can avoid this poor representation by dropping your isFirstStop column and connecting your records using an id_PreviousBusStop column (if possible). In that case, you have to set id_PreviousBusStop to null for the first record.\nYou may save space (for fixed-length data, the entire space is still reserved). Moreover your queries will then become more efficient using less characters.
\n
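The SQL Server query above can be exercised on any engine that implements WITH RECURSIVE from SQL:1999. A minimal sketch using Python's sqlite3 module, with an invented two-line timetable (table and column names follow the answer; the data is made up):

```python
import sqlite3

# Hypothetical sample data for the answer's BusLine_BusStop table.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE BusLine_BusStop (
    BusLineId INTEGER, BusStopId INTEGER,
    NextBusStopId INTEGER, IsFirstStop INTEGER
);
INSERT INTO BusLine_BusStop VALUES
    (1, 10, 20, 1),    -- line 1: 10 -> 20 -> 30
    (1, 20, 30, 0),
    (1, 30, NULL, 0),
    (2, 30, 10, 1),    -- line 2: 30 -> 10
    (2, 10, NULL, 0);
""")

# Same shape as the SQL Server 2012 query: anchor on the first stop,
# then repeatedly join the next stop onto the route built so far.
rows = con.execute("""
WITH RECURSIVE route(BusLineId, BusStopId, NextBusStopId) AS (
    SELECT BusLineId, BusStopId, NextBusStopId
    FROM BusLine_BusStop WHERE IsFirstStop = 1
    UNION ALL
    SELECT b.BusLineId, b.BusStopId, b.NextBusStopId
    FROM BusLine_BusStop b
    JOIN route r ON r.BusLineId = b.BusLineId
               AND r.NextBusStopId = b.BusStopId
)
SELECT BusLineId, BusStopId FROM route ORDER BY BusLineId
""").fetchall()
print(rows)
```

Each bus line yields every stop reachable from its first stop, exactly as in the answer's recursive CTE.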
Solutions for SQL Server 2008-2012, PostgreSQL 9.1.9, Oracle 11g
qid & accept id:
(16877276, 16877496)
query:
MySQL self inner joining and seaching in it
soup:
If I understand correctly, you probably meant Fanta of Coca-Cola, not vice versa.
SELECT p.id_product,
       CONCAT(p.name_product, ' of ', p1.name_product) name_product,
       p.has_choice,
       p.choice_id
  FROM products p JOIN products p1
    ON p.choice_id = p1.id_product
Note that in this particular case the INNER JOIN eliminates the need for has_choice to get the products that are choices of parent products.
Output:
| ID_PRODUCT |        NAME_PRODUCT | HAS_CHOICE | CHOICE_ID |
-------------------------------------------------------------
|          3 |  Fanta of Coca-Cola |          0 |         2 |
|          4 | Sprite of Coca-Cola |          0 |         2 |
Here is a SQLFiddle demo.
UPDATE 1: To get a list of all products, whether they are choices of another product or not, you need to use LEFT JOIN. To search in the product names of both parent products and choices, use the appropriate table aliases in the WHERE clause.
SELECT p.id_product,
       CASE WHEN p1.id_product IS NULL THEN
           p.name_product
       ELSE
           CONCAT(p.name_product, ' of ', p1.name_product)
       END name_product,
       p.has_choice,
       p.choice_id
  FROM products p LEFT JOIN products p1 -- use LEFT JOIN here
    ON p.choice_id = p1.id_product
 WHERE p.has_choice = 0                 -- filter out parent products
   AND (p.name_product LIKE '%a%'       -- search in product name
        OR
        p1.name_product LIKE '%a%')     -- search in product name of a parent product
The CASE in that query keeps the plain product name for products that are not choices.
Output:
| ID_PRODUCT |        NAME_PRODUCT | HAS_CHOICE | CHOICE_ID |
-------------------------------------------------------------
|          3 |  Fanta of Coca-Cola |          0 |         2 |
|          4 | Sprite of Coca-Cola |          0 |         2 |
|          5 |               Axion |          0 |         0 |
Here is a SQLFiddle demo.
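The LEFT JOIN + CASE pattern can be checked quickly with Python's sqlite3 (SQLite spells CONCAT as ||; the sample rows are invented to match the answer's output):

```python
import sqlite3

# Invented sample rows for the answer's products table.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE products (
    id_product INTEGER, name_product TEXT,
    has_choice INTEGER, choice_id INTEGER
);
INSERT INTO products VALUES
    (2, 'Coca-Cola', 1, 0),
    (3, 'Fanta',     0, 2),
    (4, 'Sprite',    0, 2),
    (5, 'Axion',     0, 0);
""")

# Same LEFT JOIN + CASE shape as the answer; the CASE keeps the plain
# name for products that are not choices of a parent product.
rows = con.execute("""
SELECT p.id_product,
       CASE WHEN p1.id_product IS NULL THEN p.name_product
            ELSE p.name_product || ' of ' || p1.name_product
       END AS name_product
  FROM products p
  LEFT JOIN products p1 ON p.choice_id = p1.id_product
 WHERE p.has_choice = 0
""").fetchall()
print(rows)
```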
qid & accept id:
(16887108, 16887184)
query:
How to specify a foreign key?
soup:
Use the :foreign_key option:
has_many :posts, :foreign_key => :poster_id
For the Post model it will be
belongs_to :user, :foreign_key => :poster_id
or
belongs_to :poster, :class_name => 'User'
qid & accept id:
(16895364, 16895443)
query:
Select value which don't have atleast one association
soup:
Try this one:
SELECT * FROM Table1
WHERE item_id IN (
    SELECT item_id FROM Table1
    GROUP BY item_id
    HAVING MAX(category_id) = 0
)
Result:
╔═════════╦═════════════╗
║ ITEM_ID ║ CATEGORY_ID ║
╠═════════╬═════════════╣
║       4 ║           0 ║
║       5 ║           0 ║
╚═════════╩═════════════╝
See this SQLFiddle.
You can use the DISTINCT keyword if you don't want duplicate rows in the result:
SELECT DISTINCT * FROM Table1
WHERE item_id IN (
    SELECT item_id FROM Table1
    GROUP BY item_id
    HAVING MAX(category_id) = 0
);
See this SQLFiddle for more details.
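The HAVING MAX(category_id) = 0 trick relies on category ids never being negative: the maximum is 0 only when every category for that item is 0. A quick sanity check with Python's sqlite3 and invented rows:

```python
import sqlite3

# Invented data: items 1-3 have at least one non-zero category,
# items 4 and 5 have only category 0.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Table1 (item_id INTEGER, category_id INTEGER);
INSERT INTO Table1 VALUES (1, 2), (1, 0), (2, 3), (3, 1), (4, 0), (5, 0);
""")

# Items whose maximum category id is 0, i.e. items with no real category.
rows = con.execute("""
SELECT * FROM Table1
WHERE item_id IN (
    SELECT item_id FROM Table1
    GROUP BY item_id
    HAVING MAX(category_id) = 0
)
""").fetchall()
print(rows)
```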
qid & accept id:
(16914206, 16914288)
query:
SQL SELECT statement when using look up table
soup:
You need to use multiple joins to work across the relationships.
\nselect e.id, e.name, e.startDate, r.RoleName \nfrom employee e \njoin user_roles ur\non e.id = ur.employee_id\njoin roles r\non r.id = ur.role_id\n
\nFull Example
\n/*DDL*/\n\ncreate table EMPLOYEE(\n ID int,\n Name varchar(50),\n StartDate date\n);\n\ncreate table USER_ROLES(\n Employee_ID int,\n Role_ID int\n);\n\ncreate table Roles(\n ID int,\n RoleName varchar(50)\n);\n\ninsert into EMPLOYEE values(1, 'Jon Skeet', '2013-03-04');\ninsert into USER_ROLES values (1,1);\ninsert into ROLES values(1, 'Superman');\n\n/* Query */\nselect e.id, e.name, e.startDate, r.RoleName \nfrom employee e \njoin user_roles ur\non e.id = ur.employee_id\njoin roles r\non r.id = ur.role_id;\n
\n\n\n
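The full example runs unmodified on SQLite, which makes it easy to verify with Python's sqlite3 module:

```python
import sqlite3

# The answer's schema and sample rows, run on SQLite for illustration
# (the DATE value is stored as plain text here).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE EMPLOYEE (ID int, Name varchar(50), StartDate date);
CREATE TABLE USER_ROLES (Employee_ID int, Role_ID int);
CREATE TABLE ROLES (ID int, RoleName varchar(50));
INSERT INTO EMPLOYEE VALUES (1, 'Jon Skeet', '2013-03-04');
INSERT INTO USER_ROLES VALUES (1, 1);
INSERT INTO ROLES VALUES (1, 'Superman');
""")

# Two joins: employee -> link table -> role.
rows = con.execute("""
SELECT e.id, e.name, e.startDate, r.RoleName
FROM employee e
JOIN user_roles ur ON e.id = ur.employee_id
JOIN roles r ON r.id = ur.role_id
""").fetchall()
print(rows)
```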
qid & accept id:
(16930761, 16930783)
query:
Oracle 10g SQL: Return true if a column has only a value, but > 1 rows in a table
soup:
You want an aggregation with a case statement. The following query checks for multiple values (assuming no NULLs):
\nselect (case when count(distinct Reference) = 1 then 'TRUE'\n else 'FALSE'\n end)\nfrom t\n
\nIf you really need the multiple rows as well:
\nselect (case when count(distinct Reference) = 1 and count(*) > 1 then 'TRUE'\n else 'FALSE'\n end)\nfrom t\n
\n
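A quick check of the two-condition version with Python's sqlite3 and a toy table (three rows, all sharing one Reference value):

```python
import sqlite3

# Invented table t: all rows carry the same Reference.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (Reference TEXT);
INSERT INTO t VALUES ('A'), ('A'), ('A');
""")

# TRUE only when every row shares one Reference AND there is more than one row.
(flag,) = con.execute("""
SELECT CASE WHEN count(DISTINCT Reference) = 1 AND count(*) > 1
            THEN 'TRUE' ELSE 'FALSE' END
FROM t
""").fetchone()
print(flag)
```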
qid & accept id:
(16962915, 16963367)
query:
Select id on grouped unique set of data
soup:
Although SQLite has group_concat(), it won't help here because the order of the concatenated elements is arbitrary. That is the easiest way to do this.
\nInstead, we have to think of this relationally. The idea is to do the following:
\n\n- Count the number of colors that two ids have in common
\n- Count the number of colors on each id
\n- Select id pairs where these three values are equal
\n- Identify each pair by the minimum id in the pair
\n
\nThen distinct values of the minimum are the list you want.
\nThe following query takes this approach:
\nselect distinct MIN(id2)\nfrom (select t1.id as id1, t2.id as id2, count(*) as cnt\n from t t1 join\n t t2\n on t1.color = t2.color\n group by t1.id, t2.id\n ) t1t2 join\n (select t.id, COUNT(*) as cnt\n from t\n group by t.id\n ) t1sum\n on t1t2.id1 = t1sum.id and t1sum.cnt = t1t2.cnt join\n (select t.id, COUNT(*) as cnt\n from t\n group by t.id\n ) t2sum\n on t1t2.id2 = t2sum.id and t2sum.cnt = t1t2.cnt\ngroup by t1t2.id1, t1t2.cnt, t1sum.cnt, t2sum.cnt\n
\nI actually tested this in SQL Server by placing this with clause in front:
\nwith t as (\n select 1 as id, 'r' as color union all\n select 1, 'g' union all\n select 1, 'b' union all\n select 2 as id, 'r' as color union all\n select 2, 'g' union all\n select 2, 'b' union all\n select 3, 'r' union all\n select 4, 'y' union all\n select 4, 'p' union all\n select 5 as id, 'r' as color union all\n select 5, 'g' union all\n select 5, 'b' union all\n select 5, 'p'\n )\n
\n
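The same query runs on SQLite itself. Here is a sketch with Python's sqlite3 loading the answer's sample data into a real table; two ids have identical color sets exactly when the common-color count equals both per-id counts, so the expected result is one representative id per distinct color set:

```python
import sqlite3

# The sample data from the answer's WITH clause, loaded into a real table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, color TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)", [
    (1, 'r'), (1, 'g'), (1, 'b'),
    (2, 'r'), (2, 'g'), (2, 'b'),
    (3, 'r'),
    (4, 'y'), (4, 'p'),
    (5, 'r'), (5, 'g'), (5, 'b'), (5, 'p'),
])

# t1t2: pairwise common-color counts; t1sum/t2sum: per-id color counts.
# Equal-set pairs are those where all three counts agree.
rows = con.execute("""
SELECT DISTINCT MIN(id2)
FROM (SELECT t1.id AS id1, t2.id AS id2, COUNT(*) AS cnt
      FROM t t1 JOIN t t2 ON t1.color = t2.color
      GROUP BY t1.id, t2.id) t1t2
JOIN (SELECT id, COUNT(*) AS cnt FROM t GROUP BY id) t1sum
  ON t1t2.id1 = t1sum.id AND t1sum.cnt = t1t2.cnt
JOIN (SELECT id, COUNT(*) AS cnt FROM t GROUP BY id) t2sum
  ON t1t2.id2 = t2sum.id AND t2sum.cnt = t1t2.cnt
GROUP BY t1t2.id1
""").fetchall()
print(sorted(r[0] for r in rows))
```

Ids 1 and 2 share the set {r, g, b}, so only the minimum (1) survives; 3, 4, and 5 each have a unique set.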
qid & accept id:
(16971556, 16974128)
query:
Convert numeric to string inside a user-defined function
soup:
Converting numeric to text is the least of your problems.
\n\nMy purpose is to define a new variable "x%" as its name, with x\n varying as the function input.
\n
\n\nFirst of all: there are no variables in an SQL function. SQL functions are just wrappers for valid SQL statements. Input and output parameters can be named, but names are static, not dynamic.
\nYou may be thinking of a PL/pgSQL function, where you have procedural elements including variables. Parameter names are still static, though. There are no dynamic variable names in plpgsql. You can execute dynamic SQL with EXECUTE but that's something different entirely.
\nWhile it is possible to declare a static variable with a name like "123%" it is really exceptionally uncommon to do so. Maybe for deliberately obfuscating code? Other than that: Don't. Use proper, simple, legal, lower case variable names without the need to double-quote and without the potential to do something unexpected after a typo.
\nSince the window function ntile() returns integer and you run an equality check on the result, the input parameter should be integer, not numeric.
\nTo assign a variable in plpgsql you can use the assignment operator := for a single variable or SELECT INTO for any number of variables. Either way, you want the query to return a single row or you have to loop.
\nIf you want the maximum billed from the chosen percentile, you don't GROUP BY x, y. That might return multiple rows and does not do what you seem to want. Use plain max(billed) without GROUP BY to get a single row.
\nYou don't need to double quote perfectly legal column names.
\n
\nA valid function might look like this. It's not exactly what you were trying to do, which cannot be done. But it may get you closer to what you actually need.
\n\nCREATE OR REPLACE FUNCTION ntile_loop(x integer)\nRETURNS SETOF numeric as \n$func$\nDECLARE\n myvar text;\nBEGIN\n\nSELECT INTO myvar max(billed)\nFROM (\n SELECT billed, id, cm\n ,ntile(100) OVER (PARTITION BY id, cm ORDER BY billed) AS tile\n FROM table_all\n ) sub\nWHERE sub.tile = $1;\n\n-- do something with myvar, depending on the value of $1 ...\nEND\n$func$ LANGUAGE plpgsql;\n
\nLong story short, you need to study the basics before you try to create sophisticated functions.
\nPlain SQL
\nAfter Q update:
\n\nI'd like to calculate 5, 10, 20, 30, ....90th percentile and display\n all of them in the same table for each id+cm group.
\n
\nThis simple query should do it all:
\nSELECT id, cm, tile, max(billed) AS max_billed\nFROM (\n SELECT billed, id, cm\n ,ntile(100) OVER (PARTITION BY id, cm ORDER BY billed) AS tile\n FROM table_all\n ) sub\nWHERE (tile%10 = 0 OR tile = 5)\nAND tile <= 90\nGROUP BY 1,2,3\nORDER BY 1,2,3;\n
\n% .. modulo operator
\nGROUP BY 1,2,3 .. positional parameter
\n
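The final plain-SQL query is not PostgreSQL-specific; SQLite 3.25+ also provides ntile() as a window function. A sketch with Python's sqlite3 and invented billing data (100 evenly spread values for one id+cm group, so tile k contains exactly the value k):

```python
import sqlite3

# Toy data: billed amounts 1..100 for a single id+cm group (invented numbers).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table_all (id INTEGER, cm INTEGER, billed NUMERIC)")
con.executemany("INSERT INTO table_all VALUES (1, 1, ?)",
                [(v,) for v in range(1, 101)])

# Max billed inside the 5th and every 10th percentile, as in the answer
# (requires SQLite >= 3.25 for window functions).
rows = con.execute("""
SELECT id, cm, tile, MAX(billed) AS max_billed
FROM (
    SELECT billed, id, cm,
           ntile(100) OVER (PARTITION BY id, cm ORDER BY billed) AS tile
    FROM table_all
) sub
WHERE (tile % 10 = 0 OR tile = 5) AND tile <= 90
GROUP BY 1, 2, 3
ORDER BY 1, 2, 3
""").fetchall()
print(rows)
```

With 100 rows and ntile(100), each tile holds exactly one value, so max_billed equals the tile number here.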
qid & accept id:
(17003542, 17003759)
query:
How to compile multiple stored procedures from a single file?
soup:
@/path/main_script.sql:
START script_one.sql
START script_two.sql
START script_three.sql
START script_four.sql
START script_five.sql

OR

@/path/main_script.sql:
@@/path/script_one.sql
@@/path/script_two.sql
@@/path/script_three.sql
@@/path/script_four.sql
@@/path/script_five.sql
qid & accept id:
(17017125, 17017223)
query:
How to use "Group By" for date interval in postgres
soup:
You want to use the count aggregate as a window function, eg count(id) over (partition by event_date rows 3 preceeding)... but it's greatly complicated by the nature of your data. You're storing timestamps, not just dates, and you want to group by day not by number of previous events. To top it all off, you want to cross-tabulate the results.
\nIf PostgreSQL supported RANGE in window functions this would be considerably simpler than it is. As it is, you have to do it the hard way.
\nYou can then filter that through a window to get the per-event per-day lagged counts ... except that your event days aren't contiguous and unfortunately PostgreSQL window functions only support ROWS, not RANGE, so you have to join across a generated series of dates first.
\nWITH\n/* First, get a listing of event counts by day */\nevent_days(event_name, event_day, event_day_count) AS (\n SELECT event_name, date_trunc('day', event_date), count(id)\n FROM Table1\n GROUP BY event_name, date_trunc('day', event_date)\n ORDER BY date_trunc('day', event_date), event_name\n),\n/* \n * Then fill in zeros for any days within the range that didn't have any events.\n * If PostgreSQL supported RANGE windows, not just ROWS, we could get rid of this/\n */\nevent_days_contiguous(event_name, event_day, event_day_count) AS (\n SELECT event_names.event_name, gen_day, COALESCE(event_days.event_day_count,0)\n FROM generate_series( (SELECT min(event_day)::date FROM event_days), (SELECT max(event_day)::date FROM event_days), INTERVAL '1' DAY ) gen_day\n CROSS JOIN (SELECT DISTINCT event_name FROM event_days) event_names(event_name)\n LEFT OUTER JOIN event_days ON (gen_day = event_days.event_day AND event_names.event_name = event_days.event_name)\n),\n/*\n * Get the lagged counts by using the sum() function over a row window...\n */\nlagged_days(event_name, event_day_first, event_day_last, event_days_count) AS (\n SELECT event_name, event_day, first_value(event_day) OVER w, sum(event_day_count) OVER w\n FROM event_days_contiguous\n WINDOW w AS (PARTITION BY event_name ORDER BY event_day ROWS 1 PRECEDING)\n)\n/* Now do a manual pivot. For arbitrary column counts use an external tool\n * or check out the 'crosstab' function in the 'tablefunc' contrib module \n */\nSELECT d1.event_day_first, d1.event_days_count AS "Event_A", d2.event_days_count AS "Event_B"\nFROM lagged_days d1\nINNER JOIN lagged_days d2 ON (d1.event_day_first = d2.event_day_first AND d1.event_name = 'event_A' AND d2.event_name = 'event_B')\nORDER BY d1.event_day_first;\n
\nOutput with the sample data:
\n event_day_first | Event_A | Event_B \n------------------------+---------+---------\n 2013-04-24 00:00:00+08 | 2 | 1\n 2013-04-25 00:00:00+08 | 4 | 1\n 2013-04-26 00:00:00+08 | 3 | 0\n 2013-04-27 00:00:00+08 | 2 | 1\n(4 rows)\n
\nYou can potentially make the query faster but much uglier by combining the three CTE clauses into a nested query using FROM (SELECT...) and wrapping them in a view instead of a CTE for use from the outer query. This will allow Pg to "push down" predicates into the queries, greatly reducing the data you have to work with when querying subsets of the data.
\nSQLFiddle doesn't seem to be working at the moment, but here's the demo setup I used:
\nCREATE TABLE Table1 \n(id integer primary key, "event_date" timestamp not null, "event_name" text);\n\nINSERT INTO Table1\n("id", "event_date", "event_name")\nVALUES\n(101, '2013-04-24 18:33:37', 'event_A'),\n(102, '2013-04-24 20:34:37', 'event_B'),\n(103, '2013-04-24 20:40:37', 'event_A'),\n(104, '2013-04-25 01:00:00', 'event_A'),\n(105, '2013-04-25 12:00:15', 'event_A'),\n(106, '2013-04-26 00:56:10', 'event_A'),\n(107, '2013-04-27 12:00:15', 'event_A'),\n(108, '2013-04-27 12:00:15', 'event_B');\n
\nI changed the ID of the last entry from 107 to 108, as I suspect that was just an error in your manual editing.
\nHere's how to express it as a view instead:
\nCREATE VIEW lagged_days AS\nSELECT event_name, event_day AS event_day_first, sum(event_day_count) OVER w AS event_days_count \nFROM (\n SELECT event_names.event_name, gen_day, COALESCE(event_days.event_day_count,0)\n FROM generate_series( (SELECT min(event_date)::date FROM Table1), (SELECT max(event_date)::date FROM Table1), INTERVAL '1' DAY ) gen_day\n CROSS JOIN (SELECT DISTINCT event_name FROM Table1) event_names(event_name)\n LEFT OUTER JOIN (\n SELECT event_name, date_trunc('day', event_date), count(id)\n FROM Table1\n GROUP BY event_name, date_trunc('day', event_date)\n ORDER BY date_trunc('day', event_date), event_name\n ) event_days(event_name, event_day, event_day_count)\n ON (gen_day = event_days.event_day AND event_names.event_name = event_days.event_name)\n) event_days_contiguous(event_name, event_day, event_day_count)\nWINDOW w AS (PARTITION BY event_name ORDER BY event_day ROWS 1 PRECEDING);\n
\nYou can then use the view in whatever crosstab queries you want to write. It'll work with the prior hand-crosstab query:
\nSELECT d1.event_day_first, d1.event_days_count AS "Event_A", d2.event_days_count AS "Event_B"\nFROM lagged_days d1\nINNER JOIN lagged_days d2 ON (d1.event_day_first = d2.event_day_first AND d1.event_name = 'event_A' AND d2.event_name = 'event_B')\nORDER BY d1.event_day_first;\n
\n... or using crosstab from the tablefunc extension, which I'll let you study up on.
\nFor a laugh, here's the explain on the above view-based query: http://explain.depesz.com/s/nvUq
\n
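The join-across-a-generated-day-series idea can be sketched outside PostgreSQL too. Below, Python's sqlite3 builds the gap-free day list with a recursive CTE instead of generate_series and computes the rolling two-day count for event_A; the data is the answer's sample reduced to day granularity:

```python
import sqlite3

# The answer's sample events, with day-granularity dates for brevity.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Table1 (id INTEGER, event_date TEXT, event_name TEXT)")
con.executemany("INSERT INTO Table1 VALUES (?, ?, ?)", [
    (101, '2013-04-24', 'event_A'), (102, '2013-04-24', 'event_B'),
    (103, '2013-04-24', 'event_A'), (104, '2013-04-25', 'event_A'),
    (105, '2013-04-25', 'event_A'), (106, '2013-04-26', 'event_A'),
    (107, '2013-04-27', 'event_A'), (108, '2013-04-27', 'event_B'),
])

# Rolling two-day count: today's event_A count plus yesterday's, over a
# gap-free day series (SQLite has no generate_series by default, so a
# recursive CTE enumerates the days; needs SQLite >= 3.25 for the window).
rows = con.execute("""
WITH RECURSIVE days(d) AS (
    SELECT date(MIN(event_date)) FROM Table1
    UNION ALL
    SELECT date(d, '+1 day') FROM days
    WHERE d < (SELECT MAX(event_date) FROM Table1)
),
per_day AS (
    SELECT days.d,
           (SELECT COUNT(*) FROM Table1
            WHERE event_date = days.d AND event_name = 'event_A') AS n
    FROM days
)
SELECT d, SUM(n) OVER (ORDER BY d ROWS 1 PRECEDING) AS lagged
FROM per_day
ORDER BY d
""").fetchall()
print(rows)
```

The lagged column reproduces the Event_A column of the answer's output: 2, 4, 3, 2.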
qid & accept id:
(17025457, 17025549)
query:
MIN/MAX price for each product (query)
soup:
First, when you use join, you should always have an on clause, even though MySQL does not require this. If you want a cross join, then be explicit about it.
\nSecond, you don't use the tm_markets table at all in the query. It is not needed, so remove it.
\nThe resulting query should work:
\nSELECT MIN(`map`.`Product_Price`) as `minProductPrice`,\n MAX(`map`.`Product_Price`) as `maxProductPrice`,\n `pr`.`Product_Name` as `productName`\nFROM `bm_market_products` `map` join\n `bm_products` as `pr`\n on map`.`Product_Id` = `pr`.`Product_Id`\nWHERE `map`.`Product_Id` = 1 \n
\nBecause you are only choosing one product, a group by is probably not necessary. You might consider this, however:
\nSELECT MIN(`map`.`Product_Price`) as `minProductPrice`,\n MAX(`map`.`Product_Price`) as `maxProductPrice`,\n `pr`.`Product_Name` as `productName`\nFROM `bm_market_products` `map` join\n `bm_products` as `pr`\n on map`.`Product_Id` = `pr`.`Product_Id`\ngroup by `map`.`Product_Id`\n
\nThat will return the information for all products.
\n
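The GROUP BY form is easy to sanity-check with Python's sqlite3 and invented prices (product names and values here are made up; only the table shape follows the answer):

```python
import sqlite3

# Invented prices for two products across several markets.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE bm_products (Product_Id INTEGER, Product_Name TEXT);
CREATE TABLE bm_market_products (Product_Id INTEGER, Product_Price REAL);
INSERT INTO bm_products VALUES (1, 'Apples'), (2, 'Pears');
INSERT INTO bm_market_products VALUES (1, 2.0), (1, 3.5), (2, 1.0), (2, 4.0);
""")

# Min/max price per product, joined to the product name.
rows = con.execute("""
SELECT MIN(map.Product_Price), MAX(map.Product_Price), pr.Product_Name
FROM bm_market_products map
JOIN bm_products pr ON map.Product_Id = pr.Product_Id
GROUP BY map.Product_Id
""").fetchall()
print(rows)
```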
soup wrap:
First, when you use join, you should always have an on clause, even though MySQL does not require this. If you want a cross join, then be explicit about it.
Second, you don't use the tm_markets table at all in the query. It is not needed, so remove it.
The resulting query should work:
SELECT MIN(`map`.`Product_Price`) as `minProductPrice`,
MAX(`map`.`Product_Price`) as `maxProductPrice`,
`pr`.`Product_Name` as `productName`
FROM `bm_market_products` `map` join
`bm_products` as `pr`
on `map`.`Product_Id` = `pr`.`Product_Id`
WHERE `map`.`Product_Id` = 1
Because you are only choosing one product, a group by is probably not necessary. You might consider this, however:
SELECT MIN(`map`.`Product_Price`) as `minProductPrice`,
MAX(`map`.`Product_Price`) as `maxProductPrice`,
`pr`.`Product_Name` as `productName`
FROM `bm_market_products` `map` join
`bm_products` as `pr`
on `map`.`Product_Id` = `pr`.`Product_Id`
group by `map`.`Product_Id`
That will return the information for all products.
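As a quick check of the corrected query, here is a minimal sketch using SQLite in place of MySQL; the table and column names follow the answer, and the sample data is made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE bm_products (Product_Id INTEGER, Product_Name TEXT);
CREATE TABLE bm_market_products (Product_Id INTEGER, Product_Price REAL);
INSERT INTO bm_products VALUES (1, 'Apples'), (2, 'Pears');
INSERT INTO bm_market_products VALUES (1, 2.0), (1, 3.5), (2, 1.0);
""")
rows = conn.execute("""
SELECT MIN(map.Product_Price) AS minProductPrice,
       MAX(map.Product_Price) AS maxProductPrice,
       pr.Product_Name        AS productName
FROM bm_market_products map
JOIN bm_products pr ON map.Product_Id = pr.Product_Id
GROUP BY map.Product_Id
ORDER BY map.Product_Id
""").fetchall()
print(rows)  # [(2.0, 3.5, 'Apples'), (1.0, 1.0, 'Pears')]
```

Grouping by Product_Id yields one min/max pair per product, exactly as described.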
qid & accept id:
(17043777, 17044496)
query:
Is it possible to get results, and count of the results, at the same time? (to filter results based on the result count)
soup:
The following query:
\nSELECT id, related_info, count(related_info)\nFROM my_table\nWHERE \ngroup by id, related_info with rollup\n
\nwould produce results like:
\nid | related_info | count(related_info)|\n1 | info1| 1|\n1 | info2| 1|\n1 | info3| 1|\n1 | NULL | 3|\n
\nrollup adds an extra row with the summary information.
\nThe solution is easy in most databases:
\nSELECT id, related_info, count(related_info) over (partition by id)\nFROM my_table\nWHERE \n
\nGetting the equivalent in MySQL without repeating the where clause is challenging.
\nA typical alternative in MySQL, if you need the list of "related_info" is to use group_concat:
\nselect id, group_concat(related_info), count(*)\nfrom my_table\nwhere \ngroup by id;\n
\nAnd a final method, assuming that related_info is a single column that uniquely identifies each row:
\nselect mt.id, mt.related_info, t.cnt\nfrom my_table mt join\n (select id, group_concat(related_info) as relatedInfoList, count(*) as cnt\n from my_table\n where \n group by id\n ) t\n on mt.id = t.id and\n find_in_set(related_info, relatedInfoList) > 0\n
\nThis turns "related_info" into a list and then matches back to the original data. This can also be done with a unique id in the original data (which id is not based on the sample data).
\n
soup wrap:
The following query:
SELECT id, related_info, count(related_info)
FROM my_table
WHERE
group by id, related_info with rollup
would produce results like:
id | related_info | count(related_info)|
1 | info1| 1|
1 | info2| 1|
1 | info3| 1|
1 | NULL | 3|
rollup adds an extra row with the summary information.
The solution is easy in most databases:
SELECT id, related_info, count(related_info) over (partition by id)
FROM my_table
WHERE
Getting the equivalent in MySQL without repeating the where clause is challenging.
A typical alternative in MySQL, if you need the list of "related_info" is to use group_concat:
select id, group_concat(related_info), count(*)
from my_table
where
group by id;
And a final method, assuming that related_info is a single column that uniquely identifies each row:
select mt.id, mt.related_info, t.cnt
from my_table mt join
(select id, group_concat(related_info) as relatedInfoList, count(*) as cnt
from my_table
where
group by id
) t
on mt.id = t.id and
find_in_set(related_info, relatedInfoList) > 0
This turns "related_info" into a list and then matches back to the original data. This could also be done with a unique id in the original data (which, judging from the sample data, id is not).
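The "easy in most databases" window-function form can be sketched with SQLite (which supports window functions from version 3.25); the table and values here are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE my_table (id INTEGER, related_info TEXT);
INSERT INTO my_table VALUES (1,'info1'), (1,'info2'), (1,'info3'), (2,'x');
""")
# Every detail row carries its group's count alongside the other columns.
rows = conn.execute("""
SELECT id, related_info,
       COUNT(related_info) OVER (PARTITION BY id) AS cnt
FROM my_table
ORDER BY id, related_info
""").fetchall()
print(rows)  # [(1, 'info1', 3), (1, 'info2', 3), (1, 'info3', 3), (2, 'x', 1)]
```

With the count on every row, filtering by result count is a plain WHERE on cnt in an outer query.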
qid & accept id:
(17044086, 17044133)
query:
sum of two different rows(salary) in a table
soup:
try to use the coalesce operator.
\nselect sum(coalesce(columna, 0) + coalesce(columnb, 0))\n
\ncause if any part is null, result will be null.
\nif you're talking of row instead of columns :
\nSELECT SUM(Salary)\nFROM yourTable\nWHERE Name IN ('Smith', 'Wong')\nGROUP BY Name\n
\n
soup wrap:
Try using the COALESCE operator:
select sum(coalesce(columna, 0) + coalesce(columnb, 0))
because if any part is NULL, the result will be NULL.
If you're talking about rows instead of columns:
SELECT SUM(Salary)
FROM yourTable
WHERE Name IN ('Smith', 'Wong')
GROUP BY Name
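A small sketch of the row-wise sum, run against SQLite with made-up salaries (COALESCE included so NULL salaries don't null out the total):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE yourTable (Name TEXT, Salary INTEGER);
INSERT INTO yourTable VALUES ('Smith', 100), ('Smith', 50), ('Wong', 200), ('Lee', 999);
""")
rows = conn.execute("""
SELECT Name, SUM(COALESCE(Salary, 0)) AS total
FROM yourTable
WHERE Name IN ('Smith', 'Wong')
GROUP BY Name
ORDER BY Name
""").fetchall()
print(rows)  # [('Smith', 150), ('Wong', 200)]
```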
qid & accept id:
(17057129, 17057798)
query:
MongoDB - How to Determine Date Created for Dynamically Created DBs and Collections?
soup:
For database: \nYou can check the creation time for "database-name.ns" file
\nls -l test.ns\n-rw------- 1 root root 16777216 Jun 12 07:10 test.ns\n
\nFor collection:\nMost of time collection is created when you insert something into it. So, if you are not creating the collection using createCollection() command and you are using the default ObjectId for _id key, then you can get a rough estimate of the creation of the collection by knowing the time at which the first document inserted in that collection.
\nMongo > db.test.find().sort({$natural : 1}).limit(1).toArray()[0]._id.getTimestamp()\nISODate("2013-06-12T01:40:04Z")\n
\n
soup wrap:
For database:
You can check the creation time of the "database-name.ns" file:
ls -l test.ns
-rw------- 1 root root 16777216 Jun 12 07:10 test.ns
For collection:
Most of the time, a collection is created when you first insert something into it. So, if you are not creating the collection with the createCollection() command and you are using the default ObjectId for the _id key, then you can get a rough estimate of the collection's creation time from the time at which the first document was inserted into it.
Mongo > db.test.find().sort({$natural : 1}).limit(1).toArray()[0]._id.getTimestamp()
ISODate("2013-06-12T01:40:04Z")
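The getTimestamp() trick works because the first 4 bytes of an ObjectId encode its creation time; a pure-Python sketch (the ObjectId string here is hypothetical, chosen so its timestamp field decodes to the date above):

```python
from datetime import datetime, timezone

def objectid_timestamp(oid_hex: str) -> datetime:
    # The first 4 bytes (8 hex chars) of an ObjectId are the creation
    # time as seconds since the Unix epoch.
    return datetime.fromtimestamp(int(oid_hex[:8], 16), tz=timezone.utc)

print(objectid_timestamp("51b7d174e4b0123456789abc").isoformat())
# 2013-06-12T01:40:04+00:00
```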
qid & accept id:
(17070859, 17070904)
query:
SQL Server inserting Date as 1/1/1900
soup:
You have not given it as null, you're trying to insert an empty string (''). You need:
\nINSERT INTO [ABC] ([code],[updatedate],[flag],[Mfdate]) \nVALUES ('203', '6/12/2013','N/A', NULL) \n
\nAlthough really, if you're going to be inserting dates, best to insert them in YYYYMMDD format, as:
\nINSERT INTO [ABC] ([code],[updatedate],[flag],[Mfdate]) \nVALUES ('203', '20130612','N/A', NULL) \n
\n
soup wrap:
You have not given it as NULL; you're trying to insert an empty string (''), which SQL Server converts to 1/1/1900 in a datetime column. You need:
INSERT INTO [ABC] ([code],[updatedate],[flag],[Mfdate])
VALUES ('203', '6/12/2013','N/A', NULL)
Although really, if you're going to be inserting dates, it's best to insert them in the unambiguous YYYYMMDD format, as:
INSERT INTO [ABC] ([code],[updatedate],[flag],[Mfdate])
VALUES ('203', '20130612','N/A', NULL)
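The reason YYYYMMDD is preferred: '6/12/2013' is read as June 12 under US-style settings but as 6 December under day-first settings, while 'YYYYMMDD' parses the same everywhere. A small Python stand-in for the two literal styles:

```python
from datetime import datetime

# '6/12/2013' needs a locale convention to interpret; here we pick the
# US month-first reading. '20130612' needs no convention at all.
ambiguous = datetime.strptime("6/12/2013", "%m/%d/%Y")
unambiguous = datetime.strptime("20130612", "%Y%m%d")
print(ambiguous.date() == unambiguous.date())  # True
```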
qid & accept id:
(17073134, 17073196)
query:
SQL server join tables and pivot
soup:
This should work:
\nWITH Sales AS (\n SELECT\n S.SaleID,\n S.SoldBy,\n S.SalePrice,\n S.Margin,\n S.Date,\n I.SalePrice,\n I.Category\n FROM\n dbo.Sale S\n INNER JOIN dbo.SaleItem I\n ON S.SaleID = I.SaleID\n)\nSELECT *\nFROM\n Sales\n PIVOT (Max(SalePrice) FOR Category IN (Books, Printing, DVD)) P\n;\n
\nOr alternately:
\nSELECT\n S.SaleID,\n S.SoldBy,\n S.SalePrice,\n S.Margin,\n S.Date,\n I.Books,\n I.Printing,\n I.DVD\nFROM\n dbo.Sale S\n INNER JOIN (\n SELECT *\n FROM\n (SELECT SaleID, SalePrice, Category FROM dbo.SaleItem) I\n PIVOT (Max(SalePrice) FOR Category IN (Books, Printing, DVD)) P\n ) I ON S.SaleID = I.SaleID\n;\n
\nThese have the same resultset and may in fact be treated the same by the query optimizer, but possibly not. The big difference comes into play when you start putting conditions on the Sale table--you should test and see which query works better.
\nMay I suggest, however, that you do the pivoting in the presentation layer? If, for example, you are using SSRS it is quite easy to use a matrix control that will do all the pivoting for you. That is best, because then if you add a new Category, you won't have modify all your SQL code!
\nThere is a way to dynamically find the column names to pivot, but it involves dynamic SQL. I don't really recommend that as the best way, either, though it is possible.
\nAnother way that could work would be to preprocess this query--meaning to set a trigger on the Category table that rewrites a VIEW to contain all the extant categories that exist. This does solve a lot of the other problems I've mentioned, but again, using the presentation layer is best.
\nNote: If your column names (that were formerly values) are numbers or begin with a number, you must quote them with square brackets as in PIVOT (Max(Value) FOR CategoryId IN ([1], [2], [3], [4])) P. Alternately, you can modify the values before they get to the PIVOT part of the query to prepend some letters, so that the column list doesn't need escaping. For further reading on this check out the rules for identifiers in SQL Server.
\n
soup wrap:
This should work:
WITH Sales AS (
SELECT
S.SaleID,
S.SoldBy,
S.SalePrice,
S.Margin,
S.Date,
I.SalePrice,
I.Category
FROM
dbo.Sale S
INNER JOIN dbo.SaleItem I
ON S.SaleID = I.SaleID
)
SELECT *
FROM
Sales
PIVOT (Max(SalePrice) FOR Category IN (Books, Printing, DVD)) P
;
Or alternately:
SELECT
S.SaleID,
S.SoldBy,
S.SalePrice,
S.Margin,
S.Date,
I.Books,
I.Printing,
I.DVD
FROM
dbo.Sale S
INNER JOIN (
SELECT *
FROM
(SELECT SaleID, SalePrice, Category FROM dbo.SaleItem) I
PIVOT (Max(SalePrice) FOR Category IN (Books, Printing, DVD)) P
) I ON S.SaleID = I.SaleID
;
These have the same resultset and may in fact be treated the same by the query optimizer, but possibly not. The big difference comes into play when you start putting conditions on the Sale table--you should test and see which query works better.
May I suggest, however, that you do the pivoting in the presentation layer? If, for example, you are using SSRS it is quite easy to use a matrix control that will do all the pivoting for you. That is best, because then if you add a new Category, you won't have to modify all your SQL code!
There is a way to dynamically find the column names to pivot, but it involves dynamic SQL. I don't really recommend that as the best way, either, though it is possible.
Another way that could work would be to preprocess this query--meaning to set a trigger on the Category table that rewrites a VIEW to contain all the extant categories that exist. This does solve a lot of the other problems I've mentioned, but again, using the presentation layer is best.
Note: If your column names (that were formerly values) are numbers or begin with a number, you must quote them with square brackets as in PIVOT (Max(Value) FOR CategoryId IN ([1], [2], [3], [4])) P. Alternately, you can modify the values before they get to the PIVOT part of the query to prepend some letters, so that the column list doesn't need escaping. For further reading on this check out the rules for identifiers in SQL Server.
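For engines without PIVOT, the same reshaping can be done with conditional aggregation (MAX over CASE), which is what PIVOT compiles down to conceptually. A sketch in SQLite with invented sale items:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE SaleItem (SaleID INTEGER, SalePrice REAL, Category TEXT);
INSERT INTO SaleItem VALUES (1, 10, 'Books'), (1, 5, 'DVD'), (2, 7, 'Printing');
""")
# One output column per category, filled only where the row's category matches.
rows = conn.execute("""
SELECT SaleID,
       MAX(CASE WHEN Category = 'Books'    THEN SalePrice END) AS Books,
       MAX(CASE WHEN Category = 'Printing' THEN SalePrice END) AS Printing,
       MAX(CASE WHEN Category = 'DVD'      THEN SalePrice END) AS DVD
FROM SaleItem
GROUP BY SaleID
ORDER BY SaleID
""").fetchall()
print(rows)  # [(1, 10.0, None, 5.0), (2, None, 7.0, None)]
```

Like PIVOT, this hard-codes the category list, which is why the presentation-layer advice above still applies.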
qid & accept id:
(17099089, 17099191)
query:
MySQL query to append key:value to JSON string
soup:
What about this
\nUPDATE table SET table_field1 = CONCAT(table_field1,' This will be added.');\n
\nEDIT:
\nI personally would have done the manipulation with a language like PHP before inserting it. Much easier. Anyway, Ok is this what you want? This should work providing your json format that is being added is in the format {'key':'value'}
\n UPDATE table\n SET col = CONCAT_WS(",", SUBSTRING(col, 1, CHAR_LENGTH(col) - 1),SUBSTRING('newjson', 2));\n
\n
soup wrap:
What about this
UPDATE table SET table_field1 = CONCAT(table_field1,' This will be added.');
EDIT:
I personally would have done the manipulation in a language like PHP before inserting it. Much easier. Anyway, is this what you want? This should work provided the JSON being added is in the format {'key':'value'}:
UPDATE table
SET col = CONCAT_WS(",", SUBSTRING(col, 1, CHAR_LENGTH(col) - 1),SUBSTRING('newjson', 2));
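The string surgery that the CONCAT_WS/SUBSTRING combination performs can be seen in a few lines of Python (a sketch of the same splice, not MySQL itself):

```python
def append_json_pair(col: str, newjson: str) -> str:
    # Mirrors CONCAT_WS(',', SUBSTRING(col, 1, CHAR_LENGTH(col) - 1),
    #                        SUBSTRING(newjson, 2)):
    # drop col's trailing '}' and newjson's leading '{', join with a comma.
    return col[:-1] + "," + newjson[1:]

print(append_json_pair('{"a": 1}', '{"key": "value"}'))
# {"a": 1,"key": "value"}
```

This is pure string splicing; it does not validate the JSON, which is another reason to do the manipulation application-side.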
qid & accept id:
(17099697, 17099862)
query:
Output multiple child record ids to one row
soup:
You can do it using pivot and rank:
\nselect StudentID, [1] as P1, [2] as P2, [3] as P3 from (\n select StudentID, ParentID, RANK() over (PARTITION BY StudentID ORDER BY ParentID) as rnk\n from STUDENT_PARENTS\n) ranked PIVOT (min(ParentID) for rnk in ([1], [2], [3])) as p\n
\nSee it on SqlFiddle here:
\nhttp://sqlfiddle.com/#!3/e3254/9
\nIf you are using GUIDs it's a little more tricky, you need to cast them to BINARY to use min():
\nselect StudentID, \n cast([1] as uniqueidentifier) as P1, \n cast([2] as uniqueidentifier) as P2, \n cast([3] as uniqueidentifier) as P3 \nfrom (\n select StudentID, cast(ParentID as binary(16)) as ParentID, RANK() over (PARTITION BY StudentID ORDER BY StudentParentID) as rnk\n from STUDENT_PARENTS\n) ranked PIVOT (min(ParentID) for rnk in ([1], [2], [3])) as p\n
\nSqlFiddle here: http://sqlfiddle.com/#!3/8d0d7/14
\n
soup wrap:
You can do it using pivot and rank:
select StudentID, [1] as P1, [2] as P2, [3] as P3 from (
select StudentID, ParentID, RANK() over (PARTITION BY StudentID ORDER BY ParentID) as rnk
from STUDENT_PARENTS
) ranked PIVOT (min(ParentID) for rnk in ([1], [2], [3])) as p
See it on SqlFiddle here:
http://sqlfiddle.com/#!3/e3254/9
If you are using GUIDs it's a little more tricky, you need to cast them to BINARY to use min():
select StudentID,
cast([1] as uniqueidentifier) as P1,
cast([2] as uniqueidentifier) as P2,
cast([3] as uniqueidentifier) as P3
from (
select StudentID, cast(ParentID as binary(16)) as ParentID, RANK() over (PARTITION BY StudentID ORDER BY StudentParentID) as rnk
from STUDENT_PARENTS
) ranked PIVOT (min(ParentID) for rnk in ([1], [2], [3])) as p
SqlFiddle here: http://sqlfiddle.com/#!3/8d0d7/14
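The rank-then-pivot idea ports to engines without PIVOT by aggregating over the computed rank; a sketch in SQLite (window functions require 3.25+, parent ids invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE STUDENT_PARENTS (StudentID INTEGER, ParentID INTEGER);
INSERT INTO STUDENT_PARENTS VALUES (1, 101), (1, 102), (2, 201);
""")
# Rank parents within each student, then spread ranks 1..3 into columns.
rows = conn.execute("""
SELECT StudentID,
       MAX(CASE WHEN rnk = 1 THEN ParentID END) AS P1,
       MAX(CASE WHEN rnk = 2 THEN ParentID END) AS P2,
       MAX(CASE WHEN rnk = 3 THEN ParentID END) AS P3
FROM (SELECT StudentID, ParentID,
             RANK() OVER (PARTITION BY StudentID ORDER BY ParentID) AS rnk
      FROM STUDENT_PARENTS)
GROUP BY StudentID
ORDER BY StudentID
""").fetchall()
print(rows)  # [(1, 101, 102, None), (2, 201, None, None)]
```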
qid & accept id:
(17102375, 17102449)
query:
How do I use SQL's JOIN to select column A if column B = column C?
soup:
How about something like
\nSELECT m.username\nFROM members m INNER JOIN\n friends f ON m.id IN (f.user_id,f.friend_id)\nWHERE m.id = $variable\n
\nI noted that the above might return more than 1 entry based on the data in your tables, so here is another example.
\nSELECT m.username\nFROM \nmembers m\nWHERE m.id = 2 \nAND EXISTS (\n SELECT 1 \n FROM friends f \n WHERE m.id IN (f.user_id,f.friend_id)\n )\n
\nSQL Fiddle DEMO
\nThe above example will show you the difference between the 2 statements.
\nThis article has some nice visual representation of joins, and is always handy to have around.
\nIntroduction to JOINs – Basic of JOINs
\n
soup wrap:
How about something like
SELECT m.username
FROM members m INNER JOIN
friends f ON m.id IN (f.user_id,f.friend_id)
WHERE m.id = $variable
I noted that the above might return more than 1 entry based on the data in your tables, so here is another example.
SELECT m.username
FROM
members m
WHERE m.id = 2
AND EXISTS (
SELECT 1
FROM friends f
WHERE m.id IN (f.user_id,f.friend_id)
)
SQL Fiddle DEMO
The above example will show you the difference between the 2 statements.
This article has some nice visual representation of joins, and is always handy to have around.
Introduction to JOINs – Basic of JOINs
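The difference the two statements demonstrate can be reproduced with SQLite and some made-up members and friendships; the EXISTS form yields the member once no matter how many friend rows match, whereas the JOIN form returns one row per matching friendship:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE members (id INTEGER, username TEXT);
CREATE TABLE friends (user_id INTEGER, friend_id INTEGER);
INSERT INTO members VALUES (1, 'alice'), (2, 'bob'), (3, 'carol');
INSERT INTO friends VALUES (1, 2), (2, 3);
""")
# bob (id 2) appears in two friendship rows, but EXISTS returns him once.
rows = conn.execute("""
SELECT m.username
FROM members m
WHERE m.id = 2
  AND EXISTS (SELECT 1 FROM friends f
              WHERE m.id IN (f.user_id, f.friend_id))
""").fetchall()
print(rows)  # [('bob',)]
```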
qid & accept id:
(17113532, 17113770)
query:
LOAD DATA INFILE into Single Field on MySQL
soup:
There are a couple of ways of doing this, depending on the details of your scenario:
\nLOAD DATA INFILE
\nYou probably want something like this:
\nLOAD DATA LOCAL INFILE '/path/to/file/data_file.csv'\n IGNORE\n INTO TABLE `databasename`.`tablename`\n CHARACTER SET utf8\n FIELDS\n TERMINATED BY '\n'\n OPTIONALLY ENCLOSED BY '"'\n IGNORE 1 LINES\n (column1)\nSHOW WARNINGS;\n
\nThis will import from /path/to/file/data_file.csv into databasename.tablename, with each complete line in the text file being imported into a new row in the table, with all the data from that line being put into the column called column1. More details here.
\nLOAD_FILE
\nOr you could use the LOAD_FILE function, like this:
\nUPDATE table\n SET column1=LOAD_FILE('/path/to/file/data_file.csv')\n WHERE id=1;\n
\nThis will import the contents of the file /path/to/file/data_file.csv and store it in column1 of the row where id=1. More details here. This is mostly intended for loading binary files into BLOB fields, but you can use it to suck a whole text file into a single column in a single row too, if that's what you want.
\nUsing a TEXT Column
\nFor loading large text files, you should use a column of type TEXT - it can store very large amounts of text with no problems - see here for more details.\n
\n
soup wrap:
There are a couple of ways of doing this, depending on the details of your scenario:
LOAD DATA INFILE
You probably want something like this:
LOAD DATA LOCAL INFILE '/path/to/file/data_file.csv'
IGNORE
INTO TABLE `databasename`.`tablename`
CHARACTER SET utf8
FIELDS
TERMINATED BY '\n'
OPTIONALLY ENCLOSED BY '"'
IGNORE 1 LINES
(column1)
SHOW WARNINGS;
This will import from /path/to/file/data_file.csv into databasename.tablename, with each complete line in the text file being imported into a new row in the table, with all the data from that line being put into the column called column1. More details here.
LOAD_FILE
Or you could use the LOAD_FILE function, like this:
UPDATE table
SET column1=LOAD_FILE('/path/to/file/data_file.csv')
WHERE id=1;
This will import the contents of the file /path/to/file/data_file.csv and store it in column1 of the row where id=1. More details here. This is mostly intended for loading binary files into BLOB fields, but you can use it to suck a whole text file into a single column in a single row too, if that's what you want.
Using a TEXT Column
For loading large text files, you should use a column of type TEXT - it can store very large amounts of text with no problems - see here for more details.
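The two loading strategies differ only in granularity; a Python/SQLite sketch of both (file path and contents are invented for the demo):

```python
import os
import sqlite3
import tempfile

# Emulating LOAD DATA INFILE (row per line, header skipped) versus
# LOAD_FILE (whole file into one column of one row).
path = os.path.join(tempfile.mkdtemp(), "data_file.csv")
with open(path, "w") as f:
    f.write("header\nline1\nline2\n")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE per_line (column1 TEXT)")
with open(path) as f:
    next(f)  # IGNORE 1 LINES
    conn.executemany("INSERT INTO per_line VALUES (?)",
                     [(line.rstrip("\n"),) for line in f])

conn.execute("CREATE TABLE whole_file (column1 TEXT)")
with open(path) as f:
    conn.execute("INSERT INTO whole_file VALUES (?)", (f.read(),))

print(conn.execute("SELECT COUNT(*) FROM per_line").fetchone())    # (2,)
print(conn.execute("SELECT COUNT(*) FROM whole_file").fetchone())  # (1,)
```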
qid & accept id:
(17129510, 17129746)
query:
reuse the auto inserted generated field in another field
soup:
You can achieve your goal generating folio numbers at insertion time using BEFORE INSERT trigger and a separate table (if you don't mind) for sequencing.
\nFirst of all sequencing table
\nCREATE TABLE table1_seq \n (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY);\n
\nYour actual table
\nCREATE TABLE Table1\n (`id` INT NOT NULL DEFAULT 0, \n `folio` VARCHAR(32)\n ...\n );\n
\nA trigger
\nDELIMITER $$\nCREATE TRIGGER tg_table1_insert \nBEFORE INSERT ON Table1\nFOR EACH ROW\nBEGIN\n INSERT INTO table1_seq VALUES (NULL);\n SET NEW.id = LAST_INSERT_ID();\n SET NEW.folio = CONCAT(DATE_FORMAT(CURDATE(), '%d%m%y'), UPPER(NEW.folio), NEW.id);\nEND$$\nDELIMITER ;\n
\nNow you can insert a new record
\nINSERT INTO Table1 (`folio`, ...)\nVALUES ('a', ...), ('e', ...);\n
\nAnd you'll have in your table1
\n\n| ID | FOLIO |...\n-----------------...\n| 1 | 160613A1 |...\n| 2 | 160613E2 |...\n
\nHere is SQLFiddle demo.
\nAnother way is just to wrap your INSERT and UPDATE in a stored procedure
\nDELIMITER $$\nCREATE PROCEDURE sp_table1_insert (IN folio_type VARCHAR(1), ...)\nBEGIN\n DECLARE newid INT DEFAULT 0;\n START TRANSACTION;\n INSERT INTO table1 (id, ...) VALUES (NULL, ...);\n SET newid = LAST_INSERT_ID();\n UPDATE table1 \n SET folio = CONCAT(DATE_FORMAT(CURDATE(), '%d%m%y'), UPPER(folio_type), newid)\n WHERE id = newid;\n COMMIT;\nEND$$\nDELIMITER ;\n
\nAnd then insert new records using this stored procedure
\nCALL sp_table1_insert ('a',...);\nCALL sp_table1_insert ('e',...);\n
\nHere is SQLFiddle demo for that.
\n
soup wrap:
You can achieve your goal of generating folio numbers at insertion time using a BEFORE INSERT trigger and a separate table (if you don't mind) for sequencing.
First of all sequencing table
CREATE TABLE table1_seq
(id INT NOT NULL AUTO_INCREMENT PRIMARY KEY);
Your actual table
CREATE TABLE Table1
(`id` INT NOT NULL DEFAULT 0,
`folio` VARCHAR(32)
...
);
A trigger
DELIMITER $$
CREATE TRIGGER tg_table1_insert
BEFORE INSERT ON Table1
FOR EACH ROW
BEGIN
INSERT INTO table1_seq VALUES (NULL);
SET NEW.id = LAST_INSERT_ID();
SET NEW.folio = CONCAT(DATE_FORMAT(CURDATE(), '%d%m%y'), UPPER(NEW.folio), NEW.id);
END$$
DELIMITER ;
Now you can insert a new record
INSERT INTO Table1 (`folio`, ...)
VALUES ('a', ...), ('e', ...);
And you'll have in your table1
| ID | FOLIO |...
-----------------...
| 1 | 160613A1 |...
| 2 | 160613E2 |...
Here is SQLFiddle demo.
Another way is just to wrap your INSERT and UPDATE in a stored procedure
DELIMITER $$
CREATE PROCEDURE sp_table1_insert (IN folio_type VARCHAR(1), ...)
BEGIN
DECLARE newid INT DEFAULT 0;
START TRANSACTION;
INSERT INTO table1 (id, ...) VALUES (NULL, ...);
SET newid = LAST_INSERT_ID();
UPDATE table1
SET folio = CONCAT(DATE_FORMAT(CURDATE(), '%d%m%y'), UPPER(folio_type), newid)
WHERE id = newid;
COMMIT;
END$$
DELIMITER ;
And then insert new records using this stored procedure
CALL sp_table1_insert ('a',...);
CALL sp_table1_insert ('e',...);
Here is SQLFiddle demo for that.
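The stored-procedure variant's logic (insert, read the new id, write back ddmmyy + letter + id) can be sketched application-side with SQLite; the schema here is cut down to just id and folio:

```python
import sqlite3
from datetime import date

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (id INTEGER PRIMARY KEY AUTOINCREMENT, folio TEXT)")

def sp_table1_insert(folio_type: str) -> str:
    # Insert first, grab the generated id, then compose and store the folio.
    cur = conn.execute("INSERT INTO Table1 (folio) VALUES (NULL)")
    newid = cur.lastrowid
    folio = date.today().strftime("%d%m%y") + folio_type.upper() + str(newid)
    conn.execute("UPDATE Table1 SET folio = ? WHERE id = ?", (folio, newid))
    return folio

print(sp_table1_insert("a"))  # e.g. '160613A1' when run on 16 Jun 2013
print(sp_table1_insert("e"))  # e.g. '160613E2'
```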
qid & accept id:
(17163648, 17183754)
query:
How to exclude holidays between two dates?
soup:
Here is an even better and more efficient solution to the problem:
\nSELECT A.ID,\nCOUNT(A.ID) AS COUNTED\nFROM tableA A\nLEFT JOIN TableB B\nON A.tableB_id=B.id\nLEFT JOIN holiday C\nON TRUNC(C.hdate) BETWEEN (TRUNC(a.date1) +1) AND TRUNC(B.date2)\nWHERE c.hdate IS NOT NULL\nGROUP BY A.ID;\n
\nwhere TableA contains date1 and tableB contains date2. Holiday contains the list of holidays and Sundays. And this query excludes 'date1' from the count.
\nRESULT LOGIC
\ntrunc(date2) - trunc(date1) = x \nx - result of the query\n
\n
soup wrap:
Here is an even better and more efficient solution to the problem:
SELECT A.ID,
COUNT(A.ID) AS COUNTED
FROM tableA A
LEFT JOIN TableB B
ON A.tableB_id=B.id
LEFT JOIN holiday C
ON TRUNC(C.hdate) BETWEEN (TRUNC(a.date1) +1) AND TRUNC(B.date2)
WHERE c.hdate IS NOT NULL
GROUP BY A.ID;
where tableA contains date1 and tableB contains date2, and holiday contains the list of holidays and Sundays. Note that this query excludes date1 itself from the count.
RESULT LOGIC
trunc(date2) - trunc(date1) = x
x - COUNTED (the query's result) = days excluding holidays
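The arithmetic the query relies on can be checked in a few lines of Python (dates and the holiday set are invented; date1 itself is excluded, as in the query's +1):

```python
from datetime import date

def days_excluding_holidays(date1: date, date2: date, holidays: set) -> int:
    # Count the days in (date1, date2] and subtract any that appear in
    # the holiday table -- the 'x - COUNTED' logic above.
    total = (date2 - date1).days
    matched = sum(1 for h in holidays if date1 < h <= date2)
    return total - matched

holidays = {date(2013, 6, 16)}  # a Sunday stored in the holiday table
print(days_excluding_holidays(date(2013, 6, 14), date(2013, 6, 17), holidays))  # 2
```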
qid & accept id:
(17255338, 17256047)
query:
escape entire column if all of that column's fields are null (or zero)
soup:
Technically you can do that with dynamic SQL, but whether you have to proceed with this approach is very questionable.
\nDELIMITER $$\nCREATE PROCEDURE sp_select_not_empty(IN tbl_name VARCHAR(64))\nBEGIN\n SET @sql = NULL, @cols = NULL;\n SELECT\n GROUP_CONCAT(\n CONCAT(\n 'SELECT ''',\n column_name,\n ''' name, COUNT(NULLIF(',\n column_name, ', ', \n CASE WHEN data_type IN('int', 'decimal') THEN 0 WHEN data_type IN('varchar', 'char') THEN '''''' END,\n ')) n FROM ',\n tbl_name\n )\n SEPARATOR ' UNION ALL ') INTO @sql\n FROM INFORMATION_SCHEMA.COLUMNS \n WHERE table_name = tbl_name;\n\n SET @sql = CONCAT(\n 'SELECT GROUP_CONCAT(name) INTO @cols FROM (', \n @sql, \n ') q WHERE q.n > 0'\n );\n PREPARE stmt FROM @sql;\n EXECUTE stmt;\n\n SET @sql = CONCAT('SELECT ', @cols, ' FROM ', @tbl);\n PREPARE stmt FROM @sql;\n EXECUTE stmt;\n DEALLOCATE PREPARE stmt;\nEND$$\nDELIMITER ;\n
\nNow calling our procedure
\nCALL sp_select_not_empty('Table1');\n
\nAnd we get
\n\n+------+--------+--------+\n| id | value1 | value3 |\n+------+--------+--------+\n| 1 | 3 | A |\n| 2 | 5 | B |\n| 3 | 0 | C |\n| 4 | 9 | D |\n| 5 | 7 | NULL |\n| 6 | 9 | E |\n+------+--------+--------+\n
\n
soup wrap:
Technically you can do that with dynamic SQL, but whether you have to proceed with this approach is very questionable.
DELIMITER $$
CREATE PROCEDURE sp_select_not_empty(IN tbl_name VARCHAR(64))
BEGIN
SET @sql = NULL, @cols = NULL;
SELECT
GROUP_CONCAT(
CONCAT(
'SELECT ''',
column_name,
''' name, COUNT(NULLIF(',
column_name, ', ',
CASE WHEN data_type IN('int', 'decimal') THEN 0 WHEN data_type IN('varchar', 'char') THEN '''''' END,
')) n FROM ',
tbl_name
)
SEPARATOR ' UNION ALL ') INTO @sql
FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = tbl_name;
SET @sql = CONCAT(
'SELECT GROUP_CONCAT(name) INTO @cols FROM (',
@sql,
') q WHERE q.n > 0'
);
PREPARE stmt FROM @sql;
EXECUTE stmt;
SET @sql = CONCAT('SELECT ', @cols, ' FROM ', tbl_name);
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
END$$
DELIMITER ;
Now calling our procedure
CALL sp_select_not_empty('Table1');
And we get
+------+--------+--------+
| id | value1 | value3 |
+------+--------+--------+
| 1 | 3 | A |
| 2 | 5 | B |
| 3 | 0 | C |
| 4 | 9 | D |
| 5 | 7 | NULL |
| 6 | 9 | E |
+------+--------+--------+
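The column-pruning rule the dynamic SQL implements (keep a column only if some row holds a value other than NULL, 0 or '', per the NULLIF test) is easy to see in Python; the rows below are made up to match the result table:

```python
rows = [
    {"id": 1, "value1": 3, "value2": 0,    "value3": "A"},
    {"id": 2, "value1": 5, "value2": None, "value3": None},
]
# value2 is 0/NULL everywhere, so it is dropped; value3 survives because
# at least one row has a real value.
keep = [c for c in rows[0]
        if any(r[c] not in (None, 0, "") for r in rows)]
print(keep)  # ['id', 'value1', 'value3']
print([{c: r[c] for c in keep} for r in rows])
```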
qid & accept id:
(17265080, 17271468)
query:
Storing app preferences in Spring app
soup:
We use an approach with default values and a generic GUI. For that, we use a property file that contains the default value as well as type information for every key. In the database we store only those values that have been modified by the user. The database schema is just a simple key-value table. The key is the same as the one from the property file; the value is of type string, because we have to parse the default value anyway. The type info (int, positiveInt, boolean, string, text, html) from the property file is used by the generic GUI to present the right input for every key.
\nExample:
\ndefault.properties
\nmy.example.value=1\nmy.example.type=int\n
\ndefault.properties_en
\nmy.example.title=Example Value\nmy.example.description=This is..\n
\nDb:\nKey=string(256)\nValue=string(2048)
\n
soup wrap:
We use an approach with default values and a generic GUI. For that, we use a property file that contains the default value as well as type information for every key. In the database we store only those values that have been modified by the user. The database schema is just a simple key-value table. The key is the same as the one from the property file; the value is of type string, because we have to parse the default value anyway. The type info (int, positiveInt, boolean, string, text, html) from the property file is used by the generic GUI to present the right input for every key.
Example:
default.properties
my.example.value=1
my.example.type=int
default.properties_en
my.example.title=Example Value
my.example.description=This is..
Db:
Key=string(256)
Value=string(2048)
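The lookup path this describes (defaults and types from the property file, DB overrides only for user-modified keys) can be sketched as plain dictionaries; key names follow the example above:

```python
defaults = {"my.example.value": "1"}        # from default.properties
types = {"my.example.value": "int"}         # per-key type info
db_overrides = {"my.example.value": "7"}    # key/value rows from the DB

def get_pref(key: str):
    # DB override wins over the default; the string is parsed per its type.
    raw = db_overrides.get(key, defaults[key])
    return int(raw) if types[key] == "int" else raw

print(get_pref("my.example.value"))  # 7
```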
qid & accept id:
(17325149, 17325624)
query:
SQL query get lowest value from related record, subquery
soup:
You need to use an additional subquery to find out what the minimum radius is per mechanic (where the radius is greater than the distance), and then you can join this back to your two tables and get all the column information you need from the two tables:
\nSELECT m.ID, mz.Zone, m.distance, mz.radius\nFROM Mechanics m\n INNER JOIN mechanic_zones mz\n ON mz.Mechanic_ID = m.ID\n INNER JOIN\n ( SELECT m.ID, \n MIN(mz.radius) AS radius\n FROM Mechanics m\n INNER JOIN mechanic_zones mz\n ON mz.Mechanic_ID = m.ID\n WHERE mz.radius > M.distance\n GROUP BY m.ID\n ) MinZone\n ON MinZone.ID = m.ID\n AND MinZone.radius= mz.radius\nORDER BY mz.Zone;\n
\n\nIf you don't actually want to know the radius of the selected zone, and the zone with the lowest radius will always have the lowest letter you can just use:
\nSELECT m.ID, mz.MinZone, m.distance\nFROM Mechanics m\n INNER JOIN\n ( SELECT m.ID, \n MIN(mz.Zone) AS Zone\n FROM Mechanics m\n INNER JOIN mechanic_zones mz\n ON mz.Mechanic_ID = m.ID\n WHERE mz.radius > M.distance\n GROUP BY m.ID\n ) MinZone\n ON MinZone.ID = m.ID\nORDER BY MinZone.Zone;\n
\n\nEDIT
\nYour fiddle is very close to what I would use, but I would use the following so that the calculation is only done once:
\nSELECT m.id, m.name, m.distance, m.radius, m.zone\nFROM ( SELECT m.ID, \n m.Name,\n m.Distance,\n MIN(mz.radius) AS radius\n FROM ( SELECT ID, Name, (1 * Distance) AS Distance\n FROM Mechanics \n ) m\n INNER JOIN mechanic_zones mz\n ON mz.Mechanic_ID = m.ID\n WHERE mz.radius > M.distance\n GROUP BY m.ID, m.Name, m.Distance\n ) m\n INNER JOIN mechanic_zones mz\n ON mz.Mechanic_ID = m.ID\n AND mz.radius = m.radius;\n
\n\nThe reasoning behind this that your query has columns in the select list and not in a group by, so there is no guarantee that the radius returned will be lowest one. For example if you change the order in which the records are inserted to mechanic_zones (as in this fiddle) you results become:
\nID NAME DTJ RADIUS ZONE\n1 Jon 2 10 a\n2 Paul 11 50 b\n3 George 5 5 a\n
\nInstead of
\nID NAME DTJ RADIUS ZONE\n1 Jon 2 5 a\n2 Paul 11 20 b\n3 George 5 5 a\n
\nAs you can see, the radius for Jon is wrong. To explain this further, below is an extract of an explanation I have written about the shortcomings of MySQL's implementation of implicit grouping.
\n
\nI would advise avoiding the implicit grouping offered by MySQL where possible; by this I mean including columns in the select list even though they are not contained in an aggregate function or the group by clause.
\nImagine the following simple table (T):
\nID | Column1 | Column2 |\n----|---------+----------|\n1 | A | X |\n2 | A | Y |\n
\nIn MySQL you can write
\nSELECT ID, Column1, Column2\nFROM T\nGROUP BY Column1;\n
\nThis actually breaks the SQL standard, but it works in MySQL. The trouble, however, is that it is non-deterministic; the result:
\nID | Column1 | Column2 |\n----|---------+----------|\n1 | A | X |\n
\nIs no more or less correct than
\nID | Column1 | Column2 | \n----|---------+----------|\n2 | A | Y |\n
\nSo what you are saying is: give me one row for each distinct value of Column1, which both result sets satisfy. So how do you know which one you will get? Well, you don't. It seems to be a fairly popular misconception that you can add an ORDER BY clause to influence the results, so for example the following query:
\nSELECT ID, Column1, Column2\nFROM T\nGROUP BY Column1\nORDER BY ID DESC;\n
\nWould ensure that you get the following result:
\nID | Column1 | Column2 | \n----|---------+----------|\n2 | A | Y |\n
\nbecause of the ORDER BY ID DESC; however, this is not true (as demonstrated here).
\nThe MySQL documentation states:
\n\nThe server is free to choose any value from each group, so unless they are the same, the values chosen are indeterminate. Furthermore, the selection of values from each group cannot be influenced by adding an ORDER BY clause.
\n
\nSo even though you have an order by, this does not apply until after one row per group has been selected, and this one row is non-deterministic.
\nThe SQL standard does allow columns in the select list that are not contained in the GROUP BY or an aggregate function; however, these columns must be functionally dependent on a column in the GROUP BY. For example, ID in the sample table is the PRIMARY KEY, so we know it is unique in the table, so the following query conforms to the SQL standard and would run in MySQL yet fail in many DBMS currently (at the time of writing, Postgresql is the closest DBMS I know of to correctly implementing the standard):
\nSELECT ID, Column1, Column2\nFROM T\nGROUP BY ID;\n
\nSince ID is unique for each row, there can only be one value of Column1 for each ID and one value of Column2, so there is no ambiguity about what to return for each row.
\n
soup wrap:
You need to use an additional subquery to find out what the minimum radius is per mechanic (where the radius is greater than the distance), and then you can join this back to your two tables and get all the column information you need from the two tables:
SELECT m.ID, mz.Zone, m.distance, mz.radius
FROM Mechanics m
INNER JOIN mechanic_zones mz
ON mz.Mechanic_ID = m.ID
INNER JOIN
( SELECT m.ID,
MIN(mz.radius) AS radius
FROM Mechanics m
INNER JOIN mechanic_zones mz
ON mz.Mechanic_ID = m.ID
WHERE mz.radius > M.distance
GROUP BY m.ID
) MinZone
ON MinZone.ID = m.ID
AND MinZone.radius= mz.radius
ORDER BY mz.Zone;
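This greatest-n-per-group shape (join back to the per-mechanic minimum) can be verified against SQLite with a couple of invented mechanics and zones:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Mechanics (ID INTEGER, distance REAL);
CREATE TABLE mechanic_zones (Mechanic_ID INTEGER, Zone TEXT, radius REAL);
INSERT INTO Mechanics VALUES (1, 2), (2, 11);
INSERT INTO mechanic_zones VALUES (1, 'a', 5), (1, 'b', 10), (2, 'a', 5), (2, 'b', 20);
""")
# For each mechanic, find the smallest radius exceeding its distance,
# then join back to recover that zone's row.
rows = conn.execute("""
SELECT m.ID, mz.Zone, m.distance, mz.radius
FROM Mechanics m
JOIN mechanic_zones mz ON mz.Mechanic_ID = m.ID
JOIN (SELECT m.ID, MIN(mz.radius) AS radius
      FROM Mechanics m
      JOIN mechanic_zones mz ON mz.Mechanic_ID = m.ID
      WHERE mz.radius > m.distance
      GROUP BY m.ID) MinZone
  ON MinZone.ID = m.ID AND MinZone.radius = mz.radius
ORDER BY m.ID
""").fetchall()
print(rows)  # [(1, 'a', 2.0, 5.0), (2, 'b', 11.0, 20.0)]
```

Mechanic 2's zone 'a' (radius 5) is skipped because it does not exceed the distance of 11, so zone 'b' is selected.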
If you don't actually want to know the radius of the selected zone, and the zone with the lowest radius will always have the lowest letter, you can just use:
SELECT m.ID, MinZone.Zone, m.distance
FROM Mechanics m
INNER JOIN
( SELECT m.ID,
MIN(mz.Zone) AS Zone
FROM Mechanics m
INNER JOIN mechanic_zones mz
ON mz.Mechanic_ID = m.ID
WHERE mz.radius > M.distance
GROUP BY m.ID
) MinZone
ON MinZone.ID = m.ID
ORDER BY MinZone.Zone;
EDIT
Your fiddle is very close to what I would use, but I would use the following so that the calculation is only done once:
SELECT m.id, m.name, m.distance, m.radius, mz.zone
FROM ( SELECT m.ID,
m.Name,
m.Distance,
MIN(mz.radius) AS radius
FROM ( SELECT ID, Name, (1 * Distance) AS Distance
FROM Mechanics
) m
INNER JOIN mechanic_zones mz
ON mz.Mechanic_ID = m.ID
WHERE mz.radius > M.distance
GROUP BY m.ID, m.Name, m.Distance
) m
INNER JOIN mechanic_zones mz
ON mz.Mechanic_ID = m.ID
AND mz.radius = m.radius;
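The join-back-to-MIN pattern above can be sketched end to end with Python's sqlite3 module; the table layout and sample rows here are invented for illustration, not taken from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Mechanics (ID INTEGER PRIMARY KEY, Name TEXT, Distance INTEGER);
CREATE TABLE mechanic_zones (Mechanic_ID INTEGER, Zone TEXT, radius INTEGER);
INSERT INTO Mechanics VALUES (1, 'Jon', 2), (2, 'Paul', 11), (3, 'George', 5);
INSERT INTO mechanic_zones VALUES
  (1, 'a', 5), (1, 'b', 10), (2, 'b', 20), (2, 'c', 50), (3, 'a', 8);
""")

# Join back to the per-mechanic MIN(radius) so the row with the
# smallest qualifying radius is picked deterministically.
rows = con.execute("""
SELECT m.ID, m.Name, m.Distance, mz.radius, mz.Zone
FROM Mechanics m
JOIN mechanic_zones mz ON mz.Mechanic_ID = m.ID
JOIN ( SELECT m.ID, MIN(mz.radius) AS radius
       FROM Mechanics m
       JOIN mechanic_zones mz ON mz.Mechanic_ID = m.ID
       WHERE mz.radius > m.Distance
       GROUP BY m.ID
     ) MinZone
  ON MinZone.ID = m.ID AND MinZone.radius = mz.radius
ORDER BY m.ID
""").fetchall()
print(rows)
```

Each mechanic comes back exactly once, paired with the smallest zone whose radius exceeds their distance.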
The reasoning behind this is that your query has columns in the select list that are not in a GROUP BY, so there is no guarantee that the radius returned will be the lowest one. For example, if you change the order in which the records are inserted into mechanic_zones (as in this fiddle), your results become:
ID NAME DTJ RADIUS ZONE
1 Jon 2 10 a
2 Paul 11 50 b
3 George 5 5 a
Instead of
ID NAME DTJ RADIUS ZONE
1 Jon 2 5 a
2 Paul 11 20 b
3 George 5 5 a
As you can see, the radius for Jon is wrong. To explain this further, below is an extract of an explanation I have written about the shortcomings of MySQL's implementation of implicit grouping.
I would advise avoiding the implicit grouping offered by MySQL where possible; by this I mean including columns in the select list even though they are not contained in an aggregate function or the GROUP BY clause.
Imagine the following simple table (T):
ID | Column1 | Column2 |
----|---------+----------|
1 | A | X |
2 | A | Y |
In MySQL you can write
SELECT ID, Column1, Column2
FROM T
GROUP BY Column1;
This actually breaks the SQL standard, but it works in MySQL. The trouble is that it is non-deterministic; the result:
ID | Column1 | Column2 |
----|---------+----------|
1 | A | X |
Is no more or less correct than
ID | Column1 | Column2 |
----|---------+----------|
2 | A | Y |
So what you are saying is: give me one row for each distinct value of Column1. Both result sets satisfy that, so how do you know which one you will get? Well, you don't. It seems to be a fairly popular misconception that you can add an ORDER BY clause to influence the results, so that for example the following query:
SELECT ID, Column1, Column2
FROM T
GROUP BY Column1
ORDER BY ID DESC;
Would ensure that you get the following result:
ID | Column1 | Column2 |
----|---------+----------|
2 | A | Y |
because of the ORDER BY ID DESC. However, this is not true (as demonstrated here).
The MySQL documentation states:
The server is free to choose any value from each group, so unless they are the same, the values chosen are indeterminate. Furthermore, the selection of values from each group cannot be influenced by adding an ORDER BY clause.
So even though you have an ORDER BY, it does not apply until after one row per group has been selected, and that one row is non-deterministic.
The SQL standard does allow columns in the select list that are not contained in the GROUP BY or an aggregate function, but these columns must be functionally dependent on a column in the GROUP BY. For example, ID in the sample table is the PRIMARY KEY, so we know it is unique in the table; the following query therefore conforms to the SQL standard, and would run in MySQL but fail in many other DBMSs today (at the time of writing, PostgreSQL is the closest DBMS I know of to correctly implementing the standard):
SELECT ID, Column1, Column2
FROM T
GROUP BY ID;
Since ID is unique for each row, there can only be one value of Column1 and one value of Column2 for each ID, so there is no ambiguity about what to return for each row.
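SQLite happens to share MySQL's permissive handling of bare columns in grouped queries, so the difference between grouping by a non-unique column and grouping by the primary key can be demonstrated with Python's sqlite3 module (standing in for MySQL here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE T (ID INTEGER PRIMARY KEY, Column1 TEXT, Column2 TEXT);
INSERT INTO T VALUES (1, 'A', 'X'), (2, 'A', 'Y');
""")

# GROUP BY a non-unique column: one arbitrary row per group.
# Which (ID, Column2) pair comes back is not defined by the query.
arbitrary = con.execute(
    "SELECT ID, Column1, Column2 FROM T GROUP BY Column1").fetchall()
print(arbitrary)

# GROUP BY the primary key: every selected column is functionally
# dependent on ID, so the result is fully determined.
determined = con.execute(
    "SELECT ID, Column1, Column2 FROM T GROUP BY ID").fetchall()
print(determined)
```

The first query returns one row for the single 'A' group, but nothing in the SQL pins down which of the two source rows it is; the second returns both rows with no ambiguity.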
qid & accept id:
(17340363, 17443175)
query:
Replacing Text which does not match a pattern in Oracle
soup:
The above solutions didn't work and below is what I did.
\nupdate temp_table set col2=regexp_replace(col2,'([0-9]{10},[a-z0-9]+)','(\1)') ;\nupdate temp_table set col2=regexp_replace(col2,'\),[\s\S]*~\(','(\1)$');\nupdate temp_table set col2=regexp_replace(col2,'\).*?\(','$');\nupdate temp_table set col2=replace(regexp_replace(col2,'\).*',''),'(','');\n
\nAfter these 4 update commands, the col2 will have something like
\n1 1331882981,ab123456$1331890329,pqr123223\n2 1331882981,abc333$1331890329,pqrs23\n
\nThen I wrote a function to split this thing. The reason I went for the function is to split by "$" and the fact that the col2 still has >10k characters
\ncreate or replace function parse( p_clob in clob ) return sys.odciVarchar2List\npipelined\nas\n l_offset number := 1;\n l_clob clob := translate( p_clob, chr(13)|| chr(10) || chr(9), ' ' ) || '$';\n l_hit number;\nbegin\n loop\n --Find occurance of "$" from l_offset\n l_hit := instr( l_clob, '$', l_offset );\n exit when nvl(l_hit,0) = 0;\n --Extract string from l_offset to l_hit\n pipe row ( substr(l_clob, l_offset , (l_hit - l_offset)) );\n --Move offset\n l_offset := l_hit+1;\n end loop;\nend;\n
\nI then called
\nselect col1,\n REGEXP_SUBSTR(column_value, '[^,]+', 1, 1) col3,\n REGEXP_SUBSTR(column_value, '[^,]+', 1, 2) col4\n from temp_table, table(parse(temp_table.col2));\n
\n
soup wrap:
The above solutions didn't work for me, and below is what I did.
update temp_table set col2=regexp_replace(col2,'([0-9]{10},[a-z0-9]+)','(\1)') ;
update temp_table set col2=regexp_replace(col2,'\),[\s\S]*~\(','(\1)$');
update temp_table set col2=regexp_replace(col2,'\).*?\(','$');
update temp_table set col2=replace(regexp_replace(col2,'\).*',''),'(','');
After these 4 update commands, col2 will contain something like:
1 1331882981,ab123456$1331890329,pqr123223
2 1331882981,abc333$1331890329,pqrs23
Then I wrote a function to split this up. I went with a function because I needed to split on "$" and because col2 still holds more than 10k characters.
create or replace function parse( p_clob in clob ) return sys.odciVarchar2List
pipelined
as
l_offset number := 1;
l_clob clob := translate( p_clob, chr(13)|| chr(10) || chr(9), ' ' ) || '$';
l_hit number;
begin
loop
--Find occurance of "$" from l_offset
l_hit := instr( l_clob, '$', l_offset );
exit when nvl(l_hit,0) = 0;
--Extract string from l_offset to l_hit
pipe row ( substr(l_clob, l_offset , (l_hit - l_offset)) );
--Move offset
l_offset := l_hit+1;
end loop;
end;
I then called
select col1,
REGEXP_SUBSTR(column_value, '[^,]+', 1, 1) col3,
REGEXP_SUBSTR(column_value, '[^,]+', 1, 2) col4
from temp_table, table(parse(temp_table.col2));
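For readers without an Oracle instance handy, here is a plain-Python sketch of what the pipelined function plus the two REGEXP_SUBSTR calls accomplish; the function and helper names below are mine, not Oracle APIs:

```python
def parse(clob):
    """Yield one '$'-separated chunk at a time, like the pipelined function."""
    # The PL/SQL version first normalises CR/LF/TAB to spaces.
    cleaned = clob.translate(str.maketrans("\r\n\t", "   "))
    for chunk in cleaned.split("$"):
        if chunk:
            yield chunk

def split_record(chunk):
    """REGEXP_SUBSTR(column_value, '[^,]+', 1, n) for n = 1, 2."""
    parts = chunk.split(",")
    return parts[0], parts[1]

col2 = "1331882981,ab123456$1331890329,pqr123223"
rows = [split_record(c) for c in parse(col2)]
print(rows)
```

Each `$`-delimited record becomes one row, and the comma split gives the (col3, col4) pair the final SELECT produces.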
qid & accept id:
(17352572, 17353504)
query:
SQL Server - Setting multiple columns from another table
soup:
First off, I strongly suggest you look into an alternative. This will get messy very fast, as you're essentially treating rows as columns. It doesn't help much that Table1 is already denormalized - though if it really only has 3 columns, it's not that big of a deal to normalize it again.:
\nCREATE VIEW v_Table1 AS\n SELECT Id, Code1 as Code FROM Table1\n UNION SELECT Id, Code2 as Code FROM Table1\n UNION SELECT Id, Code3 as Code FROM Table1\n
\nIf we take you second query, it appears you want all possible combinations of ID and Category, and a boolean of whether that combination appears in Table2 (using Code to get back to ID in Table1).
\nSince there doesn't appear to be a canonical list of ID and Category, we'll generate it:
\nCREATE VIEW v_AllCategories AS\n SELECT DISTINCT ID, Category FROM v_Table1 CROSS JOIN Table2\n
\nGetting the list of represented ID and Category is pretty straightforward:
\nCREATE VIEW v_ReportedCategories AS\n SELECT DISTINCT ID, Category FROM Table2 \n JOIN v_Table1 ON Table2.Code = v_Table1.Code\n
\nPut those together, and we can then get the bool to tell us which exists:
\nCREATE VIEW v_CategoryReports AS\n SELECT\n T1.ID, T1.Category, CASE WHEN T2.ID IS NULL THEN 0 ELSE 1 END as Reported\n FROM v_AllCategories as T1\n LEFT OUTER JOIN v_ReportedCategories as T2 ON\n T1.ID = T2.ID\n AND T1.Category = T2.Category\n
\nThat gets you your answer in a normalized form:
\nID | Category | Reported\n10 | cat1 | 1\n10 | cat2 | 1\n10 | cat3 | 0 \n
\nFrom there, you'd need to do a PIVOT to get your Category values as columns:
\nSELECT\n ID,\n cat1,\n cat2,\n cat3\nFROM v_CategoryReports\nPIVOT (\n MAX([Reported]) FOR Category IN ([cat1], [cat2], [cat3])\n) p\n
\nSince you mentioned over 50 'Categories', I'll assume they're not really 'cat1' - 'cat50'. In which case, you'll need to code gen the pivot operation.
\nSqlFiddle with a self-contained example.
\n
soup wrap:
First off, I strongly suggest you look into an alternative. This will get messy very fast, as you're essentially treating rows as columns. It doesn't help much that Table1 is already denormalized - though if it really only has 3 columns, it's not that big of a deal to normalize it again:
CREATE VIEW v_Table1 AS
SELECT Id, Code1 as Code FROM Table1
UNION SELECT Id, Code2 as Code FROM Table1
UNION SELECT Id, Code3 as Code FROM Table1
If we take your second query, it appears you want all possible combinations of ID and Category, and a boolean indicating whether that combination appears in Table2 (using Code to get back to ID in Table1).
Since there doesn't appear to be a canonical list of ID and Category, we'll generate it:
CREATE VIEW v_AllCategories AS
SELECT DISTINCT ID, Category FROM v_Table1 CROSS JOIN Table2
Getting the list of represented ID and Category is pretty straightforward:
CREATE VIEW v_ReportedCategories AS
SELECT DISTINCT ID, Category FROM Table2
JOIN v_Table1 ON Table2.Code = v_Table1.Code
Put those together, and we can then get the boolean that tells us which combinations exist:
CREATE VIEW v_CategoryReports AS
SELECT
T1.ID, T1.Category, CASE WHEN T2.ID IS NULL THEN 0 ELSE 1 END as Reported
FROM v_AllCategories as T1
LEFT OUTER JOIN v_ReportedCategories as T2 ON
T1.ID = T2.ID
AND T1.Category = T2.Category
That gets you your answer in a normalized form:
ID | Category | Reported
10 | cat1 | 1
10 | cat2 | 1
10 | cat3 | 0
From there, you'd need to do a PIVOT to get your Category values as columns:
SELECT
ID,
cat1,
cat2,
cat3
FROM v_CategoryReports
PIVOT (
MAX([Reported]) FOR Category IN ([cat1], [cat2], [cat3])
) p
Since you mentioned over 50 'Categories', I'll assume they're not really 'cat1' - 'cat50'. In which case, you'll need to code-gen the pivot operation.
SqlFiddle with a self-contained example.
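A minimal sketch of that code generation, assuming you can fetch the distinct category names first (e.g. with SELECT DISTINCT Category FROM Table2); build_pivot_sql is a hypothetical helper, not part of any library:

```python
def build_pivot_sql(categories):
    """Generate the T-SQL PIVOT for an arbitrary list of category names."""
    # Bracket-quote each category so names with spaces survive.
    cols = ", ".join("[{}]".format(c) for c in categories)
    return (
        "SELECT ID, {cols}\n"
        "FROM v_CategoryReports\n"
        "PIVOT (\n"
        "    MAX([Reported]) FOR Category IN ({cols})\n"
        ") p"
    ).format(cols=cols)

sql = build_pivot_sql(["cat1", "cat2", "cat3"])
print(sql)
```

The same column list appears twice (select list and IN clause), which is exactly why hand-writing it for 50+ categories gets painful.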
qid & accept id:
(17524409, 17524573)
query:
Compare two sets in MySQL for equality
soup:
WHERE language IN('x','y') GROUP BY emp_id HAVING COUNT (*) = 2 \n
\n(where '2' is the number of items in the IN clause)
\nSo your whole query could be:
\nSELECT e.emp_Id\n , e.Name\n FROM Employee e\n JOIN Employee_Language l\n ON e.emp_id = l.emp_id\n WHERE l.Language IN('English', 'French')\n GROUP \n BY e.emp_id \nHAVING COUNT(*) = 2\n
\nSee this SQLFiddle
\n
soup wrap:
WHERE language IN('x','y') GROUP BY emp_id HAVING COUNT (*) = 2
(where '2' is the number of items in the IN clause)
So your whole query could be:
SELECT e.emp_Id
, e.Name
FROM Employee e
JOIN Employee_Language l
ON e.emp_id = l.emp_id
WHERE l.Language IN('English', 'French')
GROUP
BY e.emp_id
HAVING COUNT(*) = 2
See this SQLFiddle
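The query runs unchanged against SQLite, so the counting trick can be verified with Python's sqlite3 module; the employees and languages below are made-up sample data (note the trick assumes no duplicate (emp_id, Language) rows):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Employee (emp_id INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE Employee_Language (emp_id INTEGER, Language TEXT);
INSERT INTO Employee VALUES (1, 'Ann'), (2, 'Bob'), (3, 'Cat');
INSERT INTO Employee_Language VALUES
  (1, 'English'), (1, 'French'),
  (2, 'English'),
  (3, 'French'), (3, 'German');
""")

# Only employees matching *both* languages survive HAVING COUNT(*) = 2.
rows = con.execute("""
SELECT e.emp_id, e.Name
FROM Employee e
JOIN Employee_Language l ON e.emp_id = l.emp_id
WHERE l.Language IN ('English', 'French')
GROUP BY e.emp_id
HAVING COUNT(*) = 2
""").fetchall()
print(rows)
```

Bob (English only) and Cat (French plus German) each match just one IN item, so only Ann survives the HAVING.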
qid & accept id:
(17535389, 17535893)
query:
MySQL create temporary fields with values from another table
soup:
I built a schema based on your image. This is what I came up with:
\nSELECT\n a.id,\n a.first_name,\n a.surname,\n if (b1.type is null, '', 'on') as A1,\n if (b2.type is null, '', 'on') as A2,\n if (b3.type is null, '', 'on') as A3\nFROM `a`\n LEFT JOIN `b` as b1 ON a.id = b1.uid AND b1.type = 1 AND b1.status = 'accepted'\n LEFT JOIN `b` as b2 ON a.id = b2.uid AND b2.type = 2 AND b2.status = 'accepted'\n LEFT JOIN `b` as b3 ON a.id = b3.uid AND b3.type = 3 AND b3.status = 'accepted'\nGROUP BY a.id;\n
\nResult:
\n+----+------------+-----------+----+----+----+\n| id | first_name | surname | A1 | A2 | A3 |\n+----+------------+-----------+----+----+----+\n| 1 | john | smith | on | | |\n| 2 | david | russel | on | on | |\n| 3 | james | duncan | on | | on |\n| 4 | gavin | dow | on | on | |\n+----+------------+-----------+----+----+----+\n
\nHere's the data I used:
\n--\n-- Table structure for table `a`\n--\n\nCREATE TABLE IF NOT EXISTS `a` (\n `id` int(10) unsigned NOT NULL,\n `first_name` varchar(32) NOT NULL,\n `surname` varchar(32) NOT NULL,\n PRIMARY KEY (`id`)\n) ENGINE=MyISAM DEFAULT CHARSET=latin1;\n\n--\n-- Dumping data for table `a`\n--\n\nINSERT INTO `a` (`id`, `first_name`, `surname`) VALUES\n(1, 'john', 'smith'),\n(2, 'david', 'russel'),\n(3, 'james', 'duncan'),\n(4, 'gavin', 'dow');\n\n--\n-- Table structure for table `b`\n--\n\nCREATE TABLE IF NOT EXISTS `b` (\n `id` int(10) unsigned NOT NULL,\n `uid` int(10) unsigned NOT NULL,\n `type` int(10) NOT NULL,\n `status` varchar(32) NOT NULL,\n PRIMARY KEY (`id`),\n KEY `uid` (`uid`)\n) ENGINE=MyISAM DEFAULT CHARSET=latin1;\n\n--\n-- Dumping data for table `b`\n--\n\nINSERT INTO `b` (`id`, `uid`, `type`, `status`) VALUES\n(1, 1, 1, 'accepted'),\n(2, 2, 1, 'accepted'),\n(3, 2, 2, 'accepted'),\n(4, 4, 1, 'accepted'),\n(5, 4, 2, 'accepted'),\n(6, 4, 3, 'declined'),\n(7, 3, 1, 'accepted'),\n(8, 3, 2, 'declined'),\n(9, 1, 2, 'declined'),\n(10, 3, 3, 'accepted');\n
\n
soup wrap:
I built a schema based on your image. This is what I came up with:
SELECT
a.id,
a.first_name,
a.surname,
if (b1.type is null, '', 'on') as A1,
if (b2.type is null, '', 'on') as A2,
if (b3.type is null, '', 'on') as A3
FROM `a`
LEFT JOIN `b` as b1 ON a.id = b1.uid AND b1.type = 1 AND b1.status = 'accepted'
LEFT JOIN `b` as b2 ON a.id = b2.uid AND b2.type = 2 AND b2.status = 'accepted'
LEFT JOIN `b` as b3 ON a.id = b3.uid AND b3.type = 3 AND b3.status = 'accepted'
GROUP BY a.id;
Result:
+----+------------+-----------+----+----+----+
| id | first_name | surname | A1 | A2 | A3 |
+----+------------+-----------+----+----+----+
| 1 | john | smith | on | | |
| 2 | david | russel | on | on | |
| 3 | james | duncan | on | | on |
| 4 | gavin | dow | on | on | |
+----+------------+-----------+----+----+----+
Here's the data I used:
--
-- Table structure for table `a`
--
CREATE TABLE IF NOT EXISTS `a` (
`id` int(10) unsigned NOT NULL,
`first_name` varchar(32) NOT NULL,
`surname` varchar(32) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
--
-- Dumping data for table `a`
--
INSERT INTO `a` (`id`, `first_name`, `surname`) VALUES
(1, 'john', 'smith'),
(2, 'david', 'russel'),
(3, 'james', 'duncan'),
(4, 'gavin', 'dow');
--
-- Table structure for table `b`
--
CREATE TABLE IF NOT EXISTS `b` (
`id` int(10) unsigned NOT NULL,
`uid` int(10) unsigned NOT NULL,
`type` int(10) NOT NULL,
`status` varchar(32) NOT NULL,
PRIMARY KEY (`id`),
KEY `uid` (`uid`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
--
-- Dumping data for table `b`
--
INSERT INTO `b` (`id`, `uid`, `type`, `status`) VALUES
(1, 1, 1, 'accepted'),
(2, 2, 1, 'accepted'),
(3, 2, 2, 'accepted'),
(4, 4, 1, 'accepted'),
(5, 4, 2, 'accepted'),
(6, 4, 3, 'declined'),
(7, 3, 1, 'accepted'),
(8, 3, 2, 'declined'),
(9, 1, 2, 'declined'),
(10, 3, 3, 'accepted');
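A trimmed-down check of the one-LEFT-JOIN-per-type pattern using Python's sqlite3 module; SQLite lacks MySQL's IF(), so CASE expressions stand in, and with at most one accepted row per (uid, type) in this reduced data set the GROUP BY isn't needed:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE a (id INTEGER PRIMARY KEY, first_name TEXT, surname TEXT);
CREATE TABLE b (id INTEGER PRIMARY KEY, uid INTEGER, type INTEGER, status TEXT);
INSERT INTO a VALUES (1, 'john', 'smith'), (2, 'david', 'russel');
INSERT INTO b VALUES
  (1, 1, 1, 'accepted'),
  (2, 2, 1, 'accepted'),
  (3, 2, 2, 'accepted'),
  (9, 1, 2, 'declined');
""")

# One LEFT JOIN per type: each join only matches 'accepted' rows of that
# type, so a NULL on the joined side means "not accepted" -> blank flag.
rows = con.execute("""
SELECT a.id, a.first_name,
       CASE WHEN b1.type IS NULL THEN '' ELSE 'on' END AS A1,
       CASE WHEN b2.type IS NULL THEN '' ELSE 'on' END AS A2
FROM a
LEFT JOIN b b1 ON a.id = b1.uid AND b1.type = 1 AND b1.status = 'accepted'
LEFT JOIN b b2 ON a.id = b2.uid AND b2.type = 2 AND b2.status = 'accepted'
ORDER BY a.id
""").fetchall()
print(rows)
```

John's type-2 row is declined, so it fails the join condition and his A2 flag comes back blank.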
qid & accept id:
(17558290, 17558698)
query:
Performing a simple search in MySQL db with variable amount of input
soup:
Using keyword search predicates LIKE '%pattern%' is a sure way to cause poor performance, because it forces a table-scan.
\nThe best way to do a relational division query, that is to match only movies where all three criteria are matched, is to find individual rows for each of the criteria, and then JOIN them together.
\nSELECT f.*, CONCAT_WS(' ', a1.ambienceName, a2.ambienceName, a3.ambienceName) AS ambiences\nFROM Films AS f \nINNER JOIN Films_Ambiences as fa1 ON f.id = fa1.film_id \nINNER JOIN Ambiences AS a1 ON a1.id = fa1.ambience_id\nINNER JOIN Films_Ambiences as fa2 ON f.id = fa2.film_id \nINNER JOIN Ambiences AS a2 ON a2.id = fa2.ambience_id\nINNER JOIN Films_Ambiences as fa3 ON f.id = fa3.film_id \nINNER JOIN Ambiences AS a3 ON a3.id = fa3.ambience_id\nWHERE (a1.ambienceName, a2.ambienceName, a3.ambienceName) = (?, ?, ?);\n
\nYou'll need an additional JOIN to Films_Ambiences and Ambiences for each search term.
\nYou should have an index on ambienceName, and then all three lookups will be more efficient.
\nALTER TABLE Ambiences ADD KEY (ambienceName);\n
\nI compared different solutions for relational division in a recent presentation:
\n\n- Slides: http://www.slideshare.net/billkarwin/sql-query-patterns-optimized
\n- Webinar recording: http://www.percona.com/webinars/mysql-query-patterns-optimized
\n
\n
\nRe your comment:
\n\nIs there a way to alter this query so that it also displays the rest of the ambiences after the criteria are found?
\n
\nYes, but you have to join one more time to get the full set of ambiences for the film:
\nSELECT f.*, GROUP_CONCAT(a_all.ambienceName) AS ambiences\nFROM Films AS f \nINNER JOIN Films_Ambiences as fa1 ON f.id = fa1.film_id \nINNER JOIN Ambiences AS a1 ON a1.id = fa1.ambience_id\nINNER JOIN Films_Ambiences as fa2 ON f.id = fa2.film_id \nINNER JOIN Ambiences AS a2 ON a2.id = fa2.ambience_id\nINNER JOIN Films_Ambiences as fa3 ON f.id = fa3.film_id \nINNER JOIN Ambiences AS a3 ON a3.id = fa3.ambience_id\nINNER JOIN Films_Ambiences AS fa_all ON f.id = fa_all.film_id\nINNER JOIN Ambiences AS a_all ON a_all.id = fa_all.ambience_id\nWHERE (a1.ambienceName, a2.ambienceName, a3.ambienceName) = (?, ?, ?)\nGROUP BY f.id;\n
\n\nis there a way to alter this query so that the result are only films that have the ambiences required but no more?
\n
\nThe query above should do that.
\n
\n\nWhat the query does, I think, is to look for films that include the given ambiences (so it also find films that have more ambiences).
\n
\nRight, the query does not match a film unless it matches all three ambiences in the search criteria. But the film may have other ambiences beyond those in the search criteria, and all of the film's ambiences (those in the search criteria plus others) are collected as GROUP_CONCAT(a_all.ambienceName).
\nI tested this example:
\nmysql> INSERT INTO Ambiences (ambienceName) \n VALUES ('funny'), ('scary'), ('1950s'), ('London'), ('bank'), ('crime'), ('stupid');\nmysql> INSERT INTO Films (title) \n VALUES ('Mary Poppins'), ('Heist'), ('Scary Movie'), ('Godzilla'), ('Signs');\nmysql> INSERT INTO Films_Ambiences \n VALUES (1,1),(1,2),(1,4),(1,5), (2,1),(2,2),(2,5),(2,6), (3,1),(3,2),(3,7), (4,2),(4,3), (5,2),(5,7);\n\nmysql> SELECT f.*, GROUP_CONCAT(a_all.ambienceName) AS ambiences \n FROM Films AS f \n INNER JOIN Films_Ambiences as fa1 ON f.id = fa1.film_id \n INNER JOIN Ambiences AS a1 ON a1.id = fa1.ambience_id \n INNER JOIN Films_Ambiences as fa2 ON f.id = fa2.film_id \n INNER JOIN Ambiences AS a2 ON a2.id = fa2.ambience_id \n INNER JOIN Films_Ambiences as fa3 ON f.id = fa3.film_id \n INNER JOIN Ambiences AS a3 ON a3.id = fa3.ambience_id \n INNER JOIN Films_Ambiences AS fa_all ON f.id = fa_all.film_id \n INNER JOIN Ambiences AS a_all ON a_all.id = fa_all.ambience_id \n WHERE (a1.ambienceName, a2.ambienceName, a3.ambienceName) = ('funny','scary','bank') \n GROUP BY f.id;\n+----+--------------+-------------------------+\n| id | Title | ambiences |\n+----+--------------+-------------------------+\n| 1 | Mary Poppins | funny,scary,London,bank |\n| 2 | Heist | funny,scary,bank,crime |\n+----+--------------+-------------------------+\n
\nBy the way, here's the EXPLAIN showing usage of indexes:
\n+----+-------------+--------+--------+----------------------+--------------+---------+-----------------------------+------+-----------------------------------------------------------+\n| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |\n+----+-------------+--------+--------+----------------------+--------------+---------+-----------------------------+------+-----------------------------------------------------------+\n| 1 | SIMPLE | a1 | ref | PRIMARY,ambienceName | ambienceName | 258 | const | 1 | Using where; Using index; Using temporary; Using filesort |\n| 1 | SIMPLE | a2 | ref | PRIMARY,ambienceName | ambienceName | 258 | const | 1 | Using where; Using index |\n| 1 | SIMPLE | a3 | ref | PRIMARY,ambienceName | ambienceName | 258 | const | 1 | Using where; Using index |\n| 1 | SIMPLE | fa1 | ref | PRIMARY,ambience_id | ambience_id | 4 | test.a1.id | 1 | Using index |\n| 1 | SIMPLE | f | eq_ref | PRIMARY | PRIMARY | 4 | test.fa1.film_id | 1 | NULL |\n| 1 | SIMPLE | fa2 | eq_ref | PRIMARY,ambience_id | PRIMARY | 8 | test.fa1.film_id,test.a2.id | 1 | Using index |\n| 1 | SIMPLE | fa3 | eq_ref | PRIMARY,ambience_id | PRIMARY | 8 | test.fa1.film_id,test.a3.id | 1 | Using index |\n| 1 | SIMPLE | fa_all | ref | PRIMARY,ambience_id | PRIMARY | 4 | test.fa1.film_id | 1 | Using index |\n| 1 | SIMPLE | a_all | eq_ref | PRIMARY | PRIMARY | 4 | test.fa_all.ambience_id | 1 | NULL |\n+----+-------------+--------+--------+----------------------+--------------+---------+-----------------------------+------+-----------------------------------------------------------+\n
\n
\n\nI have a film1 which is scary, funny, stupid. When I search for a film which is only scary, stupid I will get film1 anyway. What if I dont want that?
\n
\nOh, okay, I totally didn't understand that was what you meant, and it's an unusual requirement in these types of problems.
\nHere's a solution:
\nmysql> SELECT f.*, GROUP_CONCAT(a_all.ambienceName) AS ambiences\n FROM Films AS f\n INNER JOIN Films_Ambiences as fa1 ON f.id = fa1.film_id\n INNER JOIN Ambiences AS a1 ON a1.id = fa1.ambience_id\n INNER JOIN Films_Ambiences as fa2 ON f.id = fa2.film_id\n INNER JOIN Ambiences AS a2 ON a2.id = fa2.ambience_id\n INNER JOIN Films_Ambiences AS fa_all ON f.id = fa_all.film_id\n WHERE (a1.ambienceName, a2.ambienceName) = ('scary','stupid')\n GROUP BY f.id\n HAVING COUNT(*) = 2\n+----+-------+--------------+\n| id | Title | ambiences |\n+----+-------+--------------+\n| 5 | Signs | scary,stupid |\n+----+-------+--------------+\n
\nThere's no need to join to a_all in this case, because we don't need the list of ambiences names, we only need the count of ambiences, which we can get just by joining to fa_all.
\n
soup wrap:
Using keyword search predicates LIKE '%pattern%' is a sure way to cause poor performance, because it forces a table-scan.
The best way to do a relational division query, that is, to match only movies meeting all three criteria, is to find an individual row for each criterion and then JOIN them together.
SELECT f.*, CONCAT_WS(' ', a1.ambienceName, a2.ambienceName, a3.ambienceName) AS ambiences
FROM Films AS f
INNER JOIN Films_Ambiences as fa1 ON f.id = fa1.film_id
INNER JOIN Ambiences AS a1 ON a1.id = fa1.ambience_id
INNER JOIN Films_Ambiences as fa2 ON f.id = fa2.film_id
INNER JOIN Ambiences AS a2 ON a2.id = fa2.ambience_id
INNER JOIN Films_Ambiences as fa3 ON f.id = fa3.film_id
INNER JOIN Ambiences AS a3 ON a3.id = fa3.ambience_id
WHERE (a1.ambienceName, a2.ambienceName, a3.ambienceName) = (?, ?, ?);
You'll need an additional JOIN to Films_Ambiences and Ambiences for each search term.
You should have an index on ambienceName, and then all three lookups will be more efficient.
ALTER TABLE Ambiences ADD KEY (ambienceName);
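The relational-division query can be exercised with Python's sqlite3 module, reusing the films/ambiences sample data from later in this answer; the row-value comparison is folded into the join conditions since not every engine supports (a, b, c) = (?, ?, ?):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Films (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE Ambiences (id INTEGER PRIMARY KEY, ambienceName TEXT);
CREATE TABLE Films_Ambiences (film_id INTEGER, ambience_id INTEGER);
INSERT INTO Ambiences (ambienceName) VALUES
  ('funny'), ('scary'), ('1950s'), ('London'), ('bank'), ('crime'), ('stupid');
INSERT INTO Films (title) VALUES
  ('Mary Poppins'), ('Heist'), ('Scary Movie'), ('Godzilla'), ('Signs');
INSERT INTO Films_Ambiences VALUES
  (1,1),(1,2),(1,4),(1,5), (2,1),(2,2),(2,5),(2,6), (3,1),(3,2),(3,7),
  (4,2),(4,3), (5,2),(5,7);
""")

# One join pair per required ambience: a film only survives all three
# INNER JOINs if it carries all three ambiences.
rows = con.execute("""
SELECT f.id, f.title
FROM Films f
JOIN Films_Ambiences fa1 ON f.id = fa1.film_id
JOIN Ambiences a1 ON a1.id = fa1.ambience_id AND a1.ambienceName = ?
JOIN Films_Ambiences fa2 ON f.id = fa2.film_id
JOIN Ambiences a2 ON a2.id = fa2.ambience_id AND a2.ambienceName = ?
JOIN Films_Ambiences fa3 ON f.id = fa3.film_id
JOIN Ambiences a3 ON a3.id = fa3.ambience_id AND a3.ambienceName = ?
ORDER BY f.id
""", ("funny", "scary", "bank")).fetchall()
print(rows)
```

Scary Movie drops out because it has no 'bank' row for fa3 to match, while Mary Poppins and Heist satisfy all three joins.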
I compared different solutions for relational division in a recent presentation:
- Slides: http://www.slideshare.net/billkarwin/sql-query-patterns-optimized
- Webinar recording: http://www.percona.com/webinars/mysql-query-patterns-optimized
Re your comment:
Is there a way to alter this query so that it also displays the rest of the ambiences after the criteria are found?
Yes, but you have to join one more time to get the full set of ambiences for the film:
SELECT f.*, GROUP_CONCAT(a_all.ambienceName) AS ambiences
FROM Films AS f
INNER JOIN Films_Ambiences as fa1 ON f.id = fa1.film_id
INNER JOIN Ambiences AS a1 ON a1.id = fa1.ambience_id
INNER JOIN Films_Ambiences as fa2 ON f.id = fa2.film_id
INNER JOIN Ambiences AS a2 ON a2.id = fa2.ambience_id
INNER JOIN Films_Ambiences as fa3 ON f.id = fa3.film_id
INNER JOIN Ambiences AS a3 ON a3.id = fa3.ambience_id
INNER JOIN Films_Ambiences AS fa_all ON f.id = fa_all.film_id
INNER JOIN Ambiences AS a_all ON a_all.id = fa_all.ambience_id
WHERE (a1.ambienceName, a2.ambienceName, a3.ambienceName) = (?, ?, ?)
GROUP BY f.id;
is there a way to alter this query so that the result are only films that have the ambiences required but no more?
The query above should do that.
What the query does, I think, is to look for films that include the given ambiences (so it also find films that have more ambiences).
Right, the query does not match a film unless it matches all three ambiences in the search criteria. But the film may have other ambiences beyond those in the search criteria, and all of the film's ambiences (those in the search criteria plus others) are collected as GROUP_CONCAT(a_all.ambienceName).
I tested this example:
mysql> INSERT INTO Ambiences (ambienceName)
VALUES ('funny'), ('scary'), ('1950s'), ('London'), ('bank'), ('crime'), ('stupid');
mysql> INSERT INTO Films (title)
VALUES ('Mary Poppins'), ('Heist'), ('Scary Movie'), ('Godzilla'), ('Signs');
mysql> INSERT INTO Films_Ambiences
VALUES (1,1),(1,2),(1,4),(1,5), (2,1),(2,2),(2,5),(2,6), (3,1),(3,2),(3,7), (4,2),(4,3), (5,2),(5,7);
mysql> SELECT f.*, GROUP_CONCAT(a_all.ambienceName) AS ambiences
FROM Films AS f
INNER JOIN Films_Ambiences as fa1 ON f.id = fa1.film_id
INNER JOIN Ambiences AS a1 ON a1.id = fa1.ambience_id
INNER JOIN Films_Ambiences as fa2 ON f.id = fa2.film_id
INNER JOIN Ambiences AS a2 ON a2.id = fa2.ambience_id
INNER JOIN Films_Ambiences as fa3 ON f.id = fa3.film_id
INNER JOIN Ambiences AS a3 ON a3.id = fa3.ambience_id
INNER JOIN Films_Ambiences AS fa_all ON f.id = fa_all.film_id
INNER JOIN Ambiences AS a_all ON a_all.id = fa_all.ambience_id
WHERE (a1.ambienceName, a2.ambienceName, a3.ambienceName) = ('funny','scary','bank')
GROUP BY f.id;
+----+--------------+-------------------------+
| id | Title | ambiences |
+----+--------------+-------------------------+
| 1 | Mary Poppins | funny,scary,London,bank |
| 2 | Heist | funny,scary,bank,crime |
+----+--------------+-------------------------+
By the way, here's the EXPLAIN showing usage of indexes:
+----+-------------+--------+--------+----------------------+--------------+---------+-----------------------------+------+-----------------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+--------+--------+----------------------+--------------+---------+-----------------------------+------+-----------------------------------------------------------+
| 1 | SIMPLE | a1 | ref | PRIMARY,ambienceName | ambienceName | 258 | const | 1 | Using where; Using index; Using temporary; Using filesort |
| 1 | SIMPLE | a2 | ref | PRIMARY,ambienceName | ambienceName | 258 | const | 1 | Using where; Using index |
| 1 | SIMPLE | a3 | ref | PRIMARY,ambienceName | ambienceName | 258 | const | 1 | Using where; Using index |
| 1 | SIMPLE | fa1 | ref | PRIMARY,ambience_id | ambience_id | 4 | test.a1.id | 1 | Using index |
| 1 | SIMPLE | f | eq_ref | PRIMARY | PRIMARY | 4 | test.fa1.film_id | 1 | NULL |
| 1 | SIMPLE | fa2 | eq_ref | PRIMARY,ambience_id | PRIMARY | 8 | test.fa1.film_id,test.a2.id | 1 | Using index |
| 1 | SIMPLE | fa3 | eq_ref | PRIMARY,ambience_id | PRIMARY | 8 | test.fa1.film_id,test.a3.id | 1 | Using index |
| 1 | SIMPLE | fa_all | ref | PRIMARY,ambience_id | PRIMARY | 4 | test.fa1.film_id | 1 | Using index |
| 1 | SIMPLE | a_all | eq_ref | PRIMARY | PRIMARY | 4 | test.fa_all.ambience_id | 1 | NULL |
+----+-------------+--------+--------+----------------------+--------------+---------+-----------------------------+------+-----------------------------------------------------------+
I have a film1 which is scary, funny, stupid. When I search for a film which is only scary, stupid I will get film1 anyway. What if I dont want that?
Oh, okay, I totally didn't understand that was what you meant, and it's an unusual requirement in these types of problems.
Here's a solution:
mysql> SELECT f.*, GROUP_CONCAT(a_all.ambienceName) AS ambiences
FROM Films AS f
INNER JOIN Films_Ambiences as fa1 ON f.id = fa1.film_id
INNER JOIN Ambiences AS a1 ON a1.id = fa1.ambience_id
INNER JOIN Films_Ambiences as fa2 ON f.id = fa2.film_id
INNER JOIN Ambiences AS a2 ON a2.id = fa2.ambience_id
INNER JOIN Films_Ambiences AS fa_all ON f.id = fa_all.film_id
WHERE (a1.ambienceName, a2.ambienceName) = ('scary','stupid')
GROUP BY f.id
HAVING COUNT(*) = 2
+----+-------+--------------+
| id | Title | ambiences |
+----+-------+--------------+
| 5 | Signs | scary,stupid |
+----+-------+--------------+
There's no need to join to a_all in this case, because we don't need the list of ambience names; we only need the count of ambiences, which we can get just by joining to fa_all.
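The COUNT(*) arithmetic is worth spelling out: each of the a1/a2 joins matches exactly one row, while fa_all matches once per ambience the film has, so each group's row count equals the film's total ambience count. A sqlite3 sketch with minimal made-up data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Films (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE Ambiences (id INTEGER PRIMARY KEY, ambienceName TEXT);
CREATE TABLE Films_Ambiences (film_id INTEGER, ambience_id INTEGER);
INSERT INTO Ambiences (ambienceName) VALUES ('funny'), ('scary'), ('stupid');
INSERT INTO Films (title) VALUES ('Scary Movie'), ('Signs');
-- Scary Movie: funny, scary, stupid; Signs: scary, stupid
INSERT INTO Films_Ambiences VALUES (1,1),(1,2),(1,3), (2,2),(2,3);
""")

# HAVING COUNT(*) = 2 keeps only films with exactly two ambiences,
# which the a1/a2 joins have already pinned to 'scary' and 'stupid'.
rows = con.execute("""
SELECT f.id, f.title
FROM Films f
JOIN Films_Ambiences fa1 ON f.id = fa1.film_id
JOIN Ambiences a1 ON a1.id = fa1.ambience_id AND a1.ambienceName = 'scary'
JOIN Films_Ambiences fa2 ON f.id = fa2.film_id
JOIN Ambiences a2 ON a2.id = fa2.ambience_id AND a2.ambienceName = 'stupid'
JOIN Films_Ambiences fa_all ON f.id = fa_all.film_id
GROUP BY f.id
HAVING COUNT(*) = 2
""").fetchall()
print(rows)
```

Scary Movie matches both required ambiences but contributes 3 joined rows (one per ambience via fa_all), so the HAVING filters it out; Signs contributes exactly 2.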
qid & accept id:
(17566573, 17566745)
query:
Convert month shortname to month number
soup:
Use STR_TO_DATE() function to convert String to Date like this:
\nSELECT STR_TO_DATE('Apr','%b')\n
\nAnd use MONTH() to get month number from the date like this:
\nSELECT MONTH(STR_TO_DATE('Apr','%b'))\n
\nSee this SQLFiddle
\n
soup wrap:
Use the STR_TO_DATE() function to convert a string to a date, like this:
SELECT STR_TO_DATE('Apr','%b')
And use MONTH() to get the month number from the date, like this:
SELECT MONTH(STR_TO_DATE('Apr','%b'))
See this SQLFiddle
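Python's strptime uses the same %b token for abbreviated month names, which makes for a quick cross-check of the conversion:

```python
from datetime import datetime

def month_number(short_name):
    """Equivalent of MONTH(STR_TO_DATE(short_name, '%b')) in MySQL."""
    return datetime.strptime(short_name, "%b").month

print(month_number("Apr"))  # 4
```

Like MySQL's %b, strptime's %b expects the locale's abbreviated month name; under the default C locale that means the English 'Jan' through 'Dec'.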
qid & accept id:
(17596708, 17597540)
query:
SQL Restrict Column values using another Table
soup:
Add a unique constraint to AllowedColors. (And consider dropping the column "ID".)
\nalter table AllowedColors\nadd constraint your_constraint_name\nunique (FamilyID, ColorID);\n
\nYou probably want each of those columns to be declared NOT NULL, too. I'll leave that to you.
\nNow you can use that pair of columns as the target of a foreign key constraint.
\nalter table fruit\nadd constraint another_constraint_name\nforeign key (FamilyID, ColorID) \n references AllowedColors (FamilyID, ColorID);\n
\nYou'll also want a foreign key from AllowedColors.FamilyID to Family.FamilyID.
\n
soup wrap:
Add a unique constraint to AllowedColors. (And consider dropping the column "ID".)
alter table AllowedColors
add constraint your_constraint_name
unique (FamilyID, ColorID);
You probably want each of those columns to be declared NOT NULL, too. I'll leave that to you.
Now you can use that pair of columns as the target of a foreign key constraint.
alter table fruit
add constraint another_constraint_name
foreign key (FamilyID, ColorID)
references AllowedColors (FamilyID, ColorID);
You'll also want a foreign key from AllowedColors.FamilyID to Family.FamilyID.
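The composite UNIQUE target plus composite FOREIGN KEY can be exercised in SQLite through Python's sqlite3 module (SQLite only enforces foreign keys after the pragma is switched on); the table and column names follow the answer, while the fruit rows are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
con.executescript("""
CREATE TABLE AllowedColors (
    FamilyID INTEGER NOT NULL,
    ColorID  INTEGER NOT NULL,
    UNIQUE (FamilyID, ColorID)
);
CREATE TABLE Fruit (
    Name     TEXT,
    FamilyID INTEGER,
    ColorID  INTEGER,
    FOREIGN KEY (FamilyID, ColorID) REFERENCES AllowedColors (FamilyID, ColorID)
);
INSERT INTO AllowedColors VALUES (1, 10), (1, 11);
""")

con.execute("INSERT INTO Fruit VALUES ('apple', 1, 10)")  # allowed pair

violated = False
try:
    con.execute("INSERT INTO Fruit VALUES ('kiwi', 2, 10)")  # pair not listed
except sqlite3.IntegrityError:
    violated = True
print("rejected:", violated)
```

The (2, 10) pair isn't in AllowedColors, so the second insert fails the composite foreign key, exactly the restriction the ALTER statements set up.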
qid & accept id:
(17598953, 17599702)
query:
postgresql: select non-outliers from view
soup:
Before Postgres 8.4 there is no built-in way to get a percentage of rows with a single query. Consider this closely related thread on the pgsql-sql list
\nYou could write a function doing the work in a single call. this should work in Postgres 8.3:
\nCREATE OR REPLACE FUNCTION foo(_pct int)\n RETURNS SETOF v_t AS\n$func$\nDECLARE\n _ct int := (SELECT count(*) FROM v_t);\n _offset int := (_ct * $1) / 100;\n _limit int := (_ct * (100 - 2 * $1)) / 100;\nBEGIN\n\nRETURN QUERY\nSELECT *\nFROM v_t\nOFFSET _offset\nLIMIT _limit;\n\nEND\n$func$ LANGUAGE plpgsql;\n
\nCall:
\nSELECT * FROM foo(5)\n
\nThis actually crops 5% from top and bottom.
\nThe return type RETURNS SETOF v_t is derived from a view named v_t directly.
\n-> SQLfiddle for Postgres 8.3.
\n
soup wrap:
Before Postgres 8.4 there is no built-in way to get a percentage of rows with a single query. Consider this closely related thread on the pgsql-sql list.
You could write a function doing the work in a single call. This should work in Postgres 8.3:
CREATE OR REPLACE FUNCTION foo(_pct int)
RETURNS SETOF v_t AS
$func$
DECLARE
_ct int := (SELECT count(*) FROM v_t);
_offset int := (_ct * $1) / 100;
_limit int := (_ct * (100 - 2 * $1)) / 100;
BEGIN
RETURN QUERY
SELECT *
FROM v_t
OFFSET _offset
LIMIT _limit;
END
$func$ LANGUAGE plpgsql;
Call:
SELECT * FROM foo(5)
This actually crops 5% from both the top and the bottom.
The return type RETURNS SETOF v_t is derived from a view named v_t directly.
-> SQLfiddle for Postgres 8.3.
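The OFFSET/LIMIT arithmetic inside the function is easy to check in isolation; trim_window is a hypothetical Python transcription that uses integer division to mirror Postgres's int math:

```python
def trim_window(row_count, pct):
    """Rows to skip and rows to keep when cropping pct% from each end."""
    offset = (row_count * pct) // 100              # rows skipped at the start
    limit = (row_count * (100 - 2 * pct)) // 100   # rows kept in the middle
    return offset, limit

print(trim_window(200, 5))   # skip 10, keep 180
```

For 200 rows at 5%, the function skips the first 10 rows and keeps the middle 180, leaving the last 10 beyond the LIMIT.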
qid & accept id:
(17665628, 17665993)
query:
alias column name by lookup query
soup:
How to solve this problem has to do with what you are doing with the result. If you have a front end with some ability program you can do a select like this (I'm assuming all column names are the same in both tables)
\n SELECT "Column Head" as RowType, * FROM TABLEA\nUNION ALL\n SELECT "Column Value" as RowType, * FROM TABLEB\n
\nThis will give you something like this:
\nRowType DPSF0010001 DPSF0010002 DPSF0010003 DPSF0010004 DPSF0010005 DPSF0010006 DPSF0010007 DPSF0010008 DPSF0010009 DPSF0010010 DPSF0010011 DPSF0010012 DPSF0010013 DPSF0010014 DPSF0010015\nColumn Head Total: Under 5 years 5 to 9 years 10 to 14 years 15 to 19 years 20 to 24 years 25 to 29 years 30 to 34 years 35 to 39 years 40 to 44 years 45 to 49 years 50 to 54 years 55 to 59 years 60 to 64 years 65 to 69 years\nColumn Value 4973 139 266 437 391 146 100 78 141 253 425 491 501 477 382\n
\nWhich should be easy to display in whatever your front end is.
\n
soup wrap:
How to solve this depends on what you are doing with the result. If you have a front end with some programming ability, you can do a select like this (I'm assuming all column names are the same in both tables):
SELECT "Column Head" as RowType, * FROM TABLEA
UNION ALL
SELECT "Column Value" as RowType, * FROM TABLEB
This will give you something like this:
RowType DPSF0010001 DPSF0010002 DPSF0010003 DPSF0010004 DPSF0010005 DPSF0010006 DPSF0010007 DPSF0010008 DPSF0010009 DPSF0010010 DPSF0010011 DPSF0010012 DPSF0010013 DPSF0010014 DPSF0010015
Column Head Total: Under 5 years 5 to 9 years 10 to 14 years 15 to 19 years 20 to 24 years 25 to 29 years 30 to 34 years 35 to 39 years 40 to 44 years 45 to 49 years 50 to 54 years 55 to 59 years 60 to 64 years 65 to 69 years
Column Value 4973 139 266 437 391 146 100 78 141 253 425 491 501 477 382
Which should be easy to display in whatever your front end is.
qid & accept id:
(17670284, 17671028)
query:
How to return record from an Oracle function with JOIN query?
soup:
You can use a strongly typed cursor and its rowtype:
\n-- example data\ncreate table t1(pk number not null primary key, val varchar2(30));\ncreate table t2(\n pk number not null primary key, \n t1_fk references t1(pk), \n val varchar2(30));\n\ninsert into t1(pk, val) values(1, 'value1');\ninsert into t2(pk, t1_fk, val) values(1, 1, 'value2a');\ninsert into t2(pk, t1_fk, val) values(2, 1, 'value2b');\n\ndeclare\n cursor cur is \n select t1.*, t2.val as t2_val \n from t1\n join t2 on t1.pk = t2.t1_fk;\n\n function get_data(arg in pls_integer) return cur%rowtype is\n l_result cur%rowtype;\n begin\n select t1.*, t2.val as t2_val \n into l_result \n from t1 \n join t2 on t1.pk = t2.t1_fk\n where t2.pk = arg;\n return l_result;\n end;\nbegin\n dbms_output.put_line(get_data(2).t2_val);\nend;\n
\nUPDATE: you can easily wrap the cursor and function inside a PL/SQL package:
\ncreate or replace package pkg_get_data as \n\n cursor cur is \n select t1.*, t2.val as t2_val \n from t1\n join t2 on t1.pk = t2.t1_fk;\n\n function get_data(arg in pls_integer) return cur%rowtype;\nend;\n
\n(package body omitted)
\n
soup wrap:
You can use a strongly typed cursor and its rowtype:
-- example data
create table t1(pk number not null primary key, val varchar2(30));
create table t2(
pk number not null primary key,
t1_fk references t1(pk),
val varchar2(30));
insert into t1(pk, val) values(1, 'value1');
insert into t2(pk, t1_fk, val) values(1, 1, 'value2a');
insert into t2(pk, t1_fk, val) values(2, 1, 'value2b');
declare
cursor cur is
select t1.*, t2.val as t2_val
from t1
join t2 on t1.pk = t2.t1_fk;
function get_data(arg in pls_integer) return cur%rowtype is
l_result cur%rowtype;
begin
select t1.*, t2.val as t2_val
into l_result
from t1
join t2 on t1.pk = t2.t1_fk
where t2.pk = arg;
return l_result;
end;
begin
dbms_output.put_line(get_data(2).t2_val);
end;
UPDATE: you can easily wrap the cursor and function inside a PL/SQL package:
create or replace package pkg_get_data as
cursor cur is
select t1.*, t2.val as t2_val
from t1
join t2 on t1.pk = t2.t1_fk;
function get_data(arg in pls_integer) return cur%rowtype;
end;
(package body omitted)
qid & accept id:
(17703008, 17703055)
query:
Counting with SQL
soup:
SELECT count(letter) occurences,\n letter\nFROM table\nGROUP BY letter\nORDER BY letter ASC\n
\nbasically you're looking for the COUNT() function. Be aware that it is an aggregate function and you must use GROUP BY at the end of your SELECT statement
\n
\nif you have your letters on two columns (say col1 and col2) you should first union them in a single one and do the count afterwards, like this:
\nSELECT count(letter) occurences,\n letter\nFROM (SELECT col1 letter\n FROM table\n UNION \n SELECT col2 letter\n FROM table)\nGROUP BY letter \nORDER BY letter;\n
\nthe inner SELECT query appends the content of col2 to col1 and renames the resulting column to "letter". The outer select, counts the occurrences of each letter in this resulting column.
\n
soup wrap:
SELECT count(letter) occurences,
letter
FROM table
GROUP BY letter
ORDER BY letter ASC
Basically you're looking for the COUNT() function. Be aware that it is an aggregate function, so you must use GROUP BY at the end of your SELECT statement.
if you have your letters on two columns (say col1 and col2) you should first union them in a single one and do the count afterwards, like this:
SELECT count(letter) occurences,
       letter
FROM (SELECT col1 letter
      FROM table
      UNION ALL
      SELECT col2 letter
      FROM table) t
GROUP BY letter
ORDER BY letter;
The inner SELECT appends the content of col2 to col1 and names the resulting column "letter". The outer SELECT counts the occurrences of each letter in that column.
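The two-column counting idea can be sketched with Python's built-in sqlite3 (table and sample data invented). Note the inner query must keep duplicate rows, or every count collapses to 1:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE letters (col1 TEXT, col2 TEXT)")
conn.executemany("INSERT INTO letters VALUES (?, ?)",
                 [("a", "b"), ("a", "a"), ("c", "b")])

# UNION ALL keeps duplicates, which is exactly what counting needs.
rows = conn.execute("""
    SELECT letter, count(letter) AS occurrences
    FROM (SELECT col1 AS letter FROM letters
          UNION ALL
          SELECT col2 FROM letters) t
    GROUP BY letter
    ORDER BY letter
""").fetchall()
print(rows)  # [('a', 3), ('b', 2), ('c', 1)]
```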
qid & accept id:
(17703863, 17704210)
query:
Finding Top level parent of each row of a table [SQL Server 2008]
soup:
I have also updated the answer in the original question, but never-mind, here is a copy also:
\n;WITH RCTE AS\n(\n SELECT ParentId, ChildId, 1 AS Lvl FROM RelationHierarchy \n\n UNION ALL\n\n SELECT rh.ParentId, rc.ChildId, Lvl+1 AS Lvl \n FROM dbo.RelationHierarchy rh\n INNER JOIN RCTE rc ON rh.ChildId = rc.ParentId\n)\n,CTE_RN AS \n(\n SELECT *, ROW_NUMBER() OVER (PARTITION BY r.ChildID ORDER BY r.Lvl DESC) RN\n FROM RCTE r\n\n)\nSELECT pc.Id AS ChildID, pc.Name AS ChildName, r.ParentId, pp.Name AS ParentName\nFROM dbo.Person pc \nLEFT JOIN CTE_RN r ON pc.id = r.CHildId AND RN =1\nLEFT JOIN dbo.Person pp ON pp.id = r.ParentId\n
\n\nNote that the slight difference is in recursive part of CTE. ChildID is now rewritten each time from the anchor part. Also addition is ROW_NUMBER() function (and new CTE) to get the top level for each child at the end.
\nEDIT - Version2
\nAfter finding a performance issues with first query, here is an improved version. Going from top-to-bottom, instead of other way around - eliminating creating of extra rows in CTE, should be much faster on high number of recursions:
\n;WITH RCTE AS\n(\n SELECT ParentId, CHildId, 1 AS Lvl FROM RelationHierarchy r1\n WHERE NOT EXISTS (SELECT * FROM RelationHierarchy r2 WHERE r2.CHildId = r1.ParentId)\n\n UNION ALL\n\n SELECT rc.ParentId, rh.CHildId, Lvl+1 AS Lvl \n FROM dbo.RelationHierarchy rh\n INNER JOIN RCTE rc ON rc.CHildId = rh.ParentId\n)\nSELECT pc.Id AS ChildID, pc.Name AS ChildName, r.ParentId, pp.Name AS ParentName\nFROM dbo.Person pc \nLEFT JOIN RCTE r ON pc.id = r.CHildId\nLEFT JOIN dbo.Person pp ON pp.id = r.ParentId \n
\n\n
soup wrap:
I have also updated the answer in the original question, but never-mind, here is a copy also:
;WITH RCTE AS
(
SELECT ParentId, ChildId, 1 AS Lvl FROM RelationHierarchy
UNION ALL
SELECT rh.ParentId, rc.ChildId, Lvl+1 AS Lvl
FROM dbo.RelationHierarchy rh
INNER JOIN RCTE rc ON rh.ChildId = rc.ParentId
)
,CTE_RN AS
(
SELECT *, ROW_NUMBER() OVER (PARTITION BY r.ChildID ORDER BY r.Lvl DESC) RN
FROM RCTE r
)
SELECT pc.Id AS ChildID, pc.Name AS ChildName, r.ParentId, pp.Name AS ParentName
FROM dbo.Person pc
LEFT JOIN CTE_RN r ON pc.id = r.CHildId AND RN =1
LEFT JOIN dbo.Person pp ON pp.id = r.ParentId
Note that the slight difference is in the recursive part of the CTE: ChildId is now carried through from the anchor part on each iteration. Another addition is the ROW_NUMBER() function (and a new CTE) to get the top level for each child at the end.
EDIT - Version2
After finding performance issues with the first query, here is an improved version. Going from top to bottom instead of the other way around eliminates the creation of extra rows in the CTE, and should be much faster with many levels of recursion:
;WITH RCTE AS
(
SELECT ParentId, CHildId, 1 AS Lvl FROM RelationHierarchy r1
WHERE NOT EXISTS (SELECT * FROM RelationHierarchy r2 WHERE r2.CHildId = r1.ParentId)
UNION ALL
SELECT rc.ParentId, rh.CHildId, Lvl+1 AS Lvl
FROM dbo.RelationHierarchy rh
INNER JOIN RCTE rc ON rc.CHildId = rh.ParentId
)
SELECT pc.Id AS ChildID, pc.Name AS ChildName, r.ParentId, pp.Name AS ParentName
FROM dbo.Person pc
LEFT JOIN RCTE r ON pc.id = r.CHildId
LEFT JOIN dbo.Person pp ON pp.id = r.ParentId
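SQLite also supports recursive CTEs, so the top-down idea of Version 2 can be sketched with a toy hierarchy (table name kept, sample data invented; the anchor picks edges whose parent is not anyone's child, then walks downward):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE RelationHierarchy (ParentId INT, ChildId INT)")
conn.executemany("INSERT INTO RelationHierarchy VALUES (?, ?)",
                 [(1, 2), (2, 3), (2, 4)])  # 1 -> 2 -> {3, 4}

rows = conn.execute("""
    WITH RECURSIVE rcte(ParentId, ChildId, Lvl) AS (
        -- anchor: edges starting at a top-level node
        SELECT ParentId, ChildId, 1 FROM RelationHierarchy r1
        WHERE NOT EXISTS (SELECT 1 FROM RelationHierarchy r2
                          WHERE r2.ChildId = r1.ParentId)
        UNION ALL
        -- carry the top-level ParentId down while descending
        SELECT rc.ParentId, rh.ChildId, rc.Lvl + 1
        FROM RelationHierarchy rh
        JOIN rcte rc ON rc.ChildId = rh.ParentId
    )
    SELECT ChildId, ParentId FROM rcte ORDER BY ChildId
""").fetchall()
print(rows)  # [(2, 1), (3, 1), (4, 1)] -- every child mapped to top-level parent 1
```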
qid & accept id:
(17707945, 17708118)
query:
SQL - Find the binary representation from the place of '1's
soup:
You can use binary and and string concatenation:
\nselect (case when test&4 > 0 then '1' else '0' end) +\n (case when test&2 > 0 then '1' else '0' end) +\n (case when test&1 > 0 then '1' else '0' end)\nfrom (select 6 as test) t;\n
\nIf you are allergic to case statements, you could do this:
\nselect CHAR(ascii(0) + (test&4)/4) +\n CHAR(ascii(0) + (test&2)/2) +\n CHAR(ascii(0) + (test&1)/1)\nfrom (select 6 as test) t\n
\n
soup wrap:
You can use bitwise AND and string concatenation:
select (case when test&4 > 0 then '1' else '0' end) +
(case when test&2 > 0 then '1' else '0' end) +
(case when test&1 > 0 then '1' else '0' end)
from (select 6 as test) t;
If you are allergic to case statements, you could do this:
select CHAR(ascii(0) + (test&4)/4) +
CHAR(ascii(0) + (test&2)/2) +
CHAR(ascii(0) + (test&1)/1)
from (select 6 as test) t
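The same bit test reads naturally in Python, if that helps verify the SQL (function name is invented):

```python
def bits3(n):
    """Render the low three bits of n as a '101'-style string."""
    return "".join("1" if n & bit else "0" for bit in (4, 2, 1))

print(bits3(6))  # 110
print(bits3(5))  # 101
```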
qid & accept id:
(17750801, 17750863)
query:
Check if mysql field contains a certain number in mysql query
soup:
http://dev.mysql.com/doc/refman/5.6/en/string-functions.html#function_find-in-set
\nSELECT ...\nWHERE FIND_IN_SET(5, list_column)\n
\nBut understand that this search is bound to be very slow. It cannot use an index, and it will cause a full table-scan (reading every row in the table). As the table grows, the query will become unusably slow.
\nPlease read my answer to Is storing a delimited list in a database column really that bad?
\n
\nYou can use @MikeChristensen's answer to be more standard. Another trick with standard SQL is this:
\nselect * from TableName\nwhere ',' || ids || ',' LIKE '%,5,%'\n
\n(in standard SQL, || is the string concatenation operator, but in MySQL, you have to SET SQL_MODE=PIPES_AS_CONCAT or SET SQL_MODE=ANSI to get that behavior.)
\nAnother MySQL-specific solution is to use a special word-boundary regular expression, which will match either the comma punctuation or beginning/end of string:
\nselect * from TableName\nwhere ids RLIKE '[[:<:]]5[[:>:]]'\n
\nNone of these solutions scale well; they all cause table-scans. Sorry I understand you cannot change the database design, but if your project next requires to make the query faster, you can tell them it's not possible without redesigning the table.
\n
soup wrap:
http://dev.mysql.com/doc/refman/5.6/en/string-functions.html#function_find-in-set
SELECT ...
WHERE FIND_IN_SET(5, list_column)
But understand that this search is bound to be very slow. It cannot use an index, and it will cause a full table-scan (reading every row in the table). As the table grows, the query will become unusably slow.
Please read my answer to Is storing a delimited list in a database column really that bad?
You can use @MikeChristensen's answer to be more standard. Another trick with standard SQL is this:
select * from TableName
where ',' || ids || ',' LIKE '%,5,%'
(in standard SQL, || is the string concatenation operator, but in MySQL, you have to SET SQL_MODE=PIPES_AS_CONCAT or SET SQL_MODE=ANSI to get that behavior.)
Another MySQL-specific solution is to use a special word-boundary regular expression, which will match either the comma punctuation or beginning/end of string:
select * from TableName
where ids RLIKE '[[:<:]]5[[:>:]]'
None of these solutions scale well; they all cause table scans. I understand you cannot change the database design, but if your project later requires the query to be faster, you can tell them it's not possible without redesigning the table.
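The comma-padding trick is easy to try in sqlite3, where || is the standard concatenation operator out of the box (table and sample data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TableName (ids TEXT)")
conn.executemany("INSERT INTO TableName VALUES (?)",
                 [("1,5,12",), ("15,25",), ("5",)])

# Pad both the column and the target with commas so '5' cannot
# accidentally match inside '15' or '25'.
rows = conn.execute(
    "SELECT ids FROM TableName WHERE ',' || ids || ',' LIKE '%,5,%'"
).fetchall()
print(rows)  # [('1,5,12',), ('5',)] -- '15,25' is correctly excluded
```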
qid & accept id:
(17769111, 17769271)
query:
How to insert data into a table and get value of a column
soup:
when using a Stored Procedure there are two basic methods:
\n 1. right after the INSERTinto Invoice use SCOPE_IDENTITY()
\n 2. use the INSERT with the OUTPOUT clause
\n
\nafter comment.
\nin the Stored Procedure:
\nDECLARE @Scope_Ident INT\nINSERT [Table] ()\nVALUES ()\n\nSET @Scope_Ident = SCOPE_IDENTITY() \n
\nif then you need return the ID to the application do:
\nSELECT @Scope_Ident\n
\n
soup wrap:
When using a stored procedure there are two basic methods:
1. right after the INSERT into Invoice, use SCOPE_IDENTITY()
2. use INSERT with the OUTPUT clause
After the comment, in the stored procedure:
DECLARE @Scope_Ident INT
INSERT [Table] ()
VALUES ()
SET @Scope_Ident = SCOPE_IDENTITY()
If you then need to return the ID to the application, do:
SELECT @Scope_Ident
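Other databases expose the same "identity of the row I just inserted" idea under different names; a sqlite3 sketch using the cursor's lastrowid (table invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Invoice (id INTEGER PRIMARY KEY AUTOINCREMENT, note TEXT)")

cur = conn.execute("INSERT INTO Invoice (note) VALUES ('first')")
new_id = cur.lastrowid  # plays the role of SCOPE_IDENTITY()
print(new_id)  # 1
```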
qid & accept id:
(17810221, 17810289)
query:
SQL - return records on the first date where records exist
soup:
Select Min(Date) \nfrom #DATEDATA\nWhere Date>=@WeekendDate\n
\nor
\nSelect * from #DATEDATA\nwhere Date=\n(\nSelect Min(Date) \nfrom #DATEDATA\nWhere Date>=@WeekendDate\n)\n
\n
soup wrap:
Select Min(Date)
from #DATEDATA
Where Date>=@WeekendDate
or
Select * from #DATEDATA
where Date=
(
Select Min(Date)
from #DATEDATA
Where Date>=@WeekendDate
)
qid & accept id:
(17828198, 17828548)
query:
Sql subquery with inner join
soup:
Your database schema is not completely clear to me, but it seems you can link tourists from the Tourist table to their extra charges in the EXTRA_CHARGES table via the Tourist_Extra_Charges table like this:
\nSELECT T.Tourist_ID\n ,T.Tourist_Name\n ,EC.Extra_Charge_ID\n ,EC.Extra_Charge_Description\nFROM Tourist AS T\nINNER JOIN Tourist_Extra_Charges AS TEC ON T.Tourist_ID= TEC.Tourist_ID\nINNER JOIN EXTRA_CHARGES AS EC ON TEC.Extra_Charge_ID = EC.Extra_Charge_ID;\n
\nEDIT
\nIf you want to be able to filter on Reservation_ID, you'll have to join the tables Tourist_Reservations and Reservations as well, like this:
\nSELECT T.Tourist_ID\n ,T.Tourist_Name\n ,EC.Extra_Charge_ID\n ,EC.Extra_Charge_Description\nFROM Tourist AS T\nINNER JOIN Tourist_Extra_Charges AS TEC ON T.Tourist_ID= TEC.Tourist_ID\nINNER JOIN EXTRA_CHARGES AS EC ON TEC.Extra_Charge_ID = EC.Extra_Charge_ID\nINNER JOIN Tourist_Reservations AS TR ON T.Tourist_ID = TR.Tourist_ID\nINNER JOIN Reservations AS R ON TR.Reservation_ID = R.Reservation_ID\nWHERE R.Reservation_ID = 27;\n
\nAs for your database schema: please note that the field Extra_Charge_ID is not necessary in your Tourist table: you already link tourists to extra charges via the Tourist_Extra_Charges table. It can be dangerous to the sanity of your data to make these kind of double connections.
\n
soup wrap:
Your database schema is not completely clear to me, but it seems you can link tourists from the Tourist table to their extra charges in the EXTRA_CHARGES table via the Tourist_Extra_Charges table like this:
SELECT T.Tourist_ID
,T.Tourist_Name
,EC.Extra_Charge_ID
,EC.Extra_Charge_Description
FROM Tourist AS T
INNER JOIN Tourist_Extra_Charges AS TEC ON T.Tourist_ID= TEC.Tourist_ID
INNER JOIN EXTRA_CHARGES AS EC ON TEC.Extra_Charge_ID = EC.Extra_Charge_ID;
EDIT
If you want to be able to filter on Reservation_ID, you'll have to join the tables Tourist_Reservations and Reservations as well, like this:
SELECT T.Tourist_ID
,T.Tourist_Name
,EC.Extra_Charge_ID
,EC.Extra_Charge_Description
FROM Tourist AS T
INNER JOIN Tourist_Extra_Charges AS TEC ON T.Tourist_ID= TEC.Tourist_ID
INNER JOIN EXTRA_CHARGES AS EC ON TEC.Extra_Charge_ID = EC.Extra_Charge_ID
INNER JOIN Tourist_Reservations AS TR ON T.Tourist_ID = TR.Tourist_ID
INNER JOIN Reservations AS R ON TR.Reservation_ID = R.Reservation_ID
WHERE R.Reservation_ID = 27;
As for your database schema: please note that the field Extra_Charge_ID is not necessary in your Tourist table: you already link tourists to extra charges via the Tourist_Extra_Charges table. It can be dangerous to the sanity of your data to make these kind of double connections.
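The junction-table join pattern can be sketched in sqlite3 with invented sample data (column and table names follow the answer's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Tourist (Tourist_ID INT, Tourist_Name TEXT);
CREATE TABLE Extra_Charges (Extra_Charge_ID INT, Extra_Charge_Description TEXT);
CREATE TABLE Tourist_Extra_Charges (Tourist_ID INT, Extra_Charge_ID INT);
INSERT INTO Tourist VALUES (1, 'Ann'), (2, 'Bob');
INSERT INTO Extra_Charges VALUES (10, 'Minibar'), (11, 'Spa');
INSERT INTO Tourist_Extra_Charges VALUES (1, 10), (1, 11), (2, 10);
""")

# Walk Tourist -> junction table -> Extra_Charges.
rows = conn.execute("""
    SELECT T.Tourist_Name, EC.Extra_Charge_Description
    FROM Tourist T
    JOIN Tourist_Extra_Charges TEC ON T.Tourist_ID = TEC.Tourist_ID
    JOIN Extra_Charges EC ON TEC.Extra_Charge_ID = EC.Extra_Charge_ID
    ORDER BY T.Tourist_Name, EC.Extra_Charge_Description
""").fetchall()
print(rows)  # [('Ann', 'Minibar'), ('Ann', 'Spa'), ('Bob', 'Minibar')]
```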
qid & accept id:
(17833022, 17833133)
query:
Do arithmetic inside database. Is this possible?
soup:
Try this:
\nupdate cartable set total = stage_1 + stage_2\n
\nIn fact, instead of storing the column total in the database, you could just create a view:
\ncreate view carview as \n select Car, state_1, stage_2, stage_1 + stage_2 as total\n from cartable\n
\n
soup wrap:
Try this:
update cartable set total = stage_1 + stage_2
In fact, instead of storing the column total in the database, you could just create a view:
create view carview as
select Car, stage_1, stage_2, stage_1 + stage_2 as total
from cartable
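A quick sqlite3 sketch of why the view beats the stored column: the computed total always reflects the current row values, with no UPDATE to keep in sync (sample data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cartable (Car TEXT, stage_1 INT, stage_2 INT)")
conn.execute("INSERT INTO cartable VALUES ('a', 2, 3)")
conn.execute("""CREATE VIEW carview AS
                SELECT Car, stage_1, stage_2, stage_1 + stage_2 AS total
                FROM cartable""")

total_before = conn.execute("SELECT total FROM carview").fetchone()[0]
conn.execute("UPDATE cartable SET stage_2 = 8")  # view recomputes automatically
total_after = conn.execute("SELECT total FROM carview").fetchone()[0]
print(total_before, total_after)  # 5 10
```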
qid & accept id:
(17851492, 17851604)
query:
Getting count from 2 table and group by month
soup:
Join both tables with month:
\nSELECT MONTH(I.date) AS `month`\n , COUNT(I.ID) AS `countin`\n , COUNT(O.ID) AS `countOUT`\n FROM TableIN I\n LEFT JOIN TableOUT O\n ON MONTH(I.Date) = MONTH(O.Date)\n GROUP BY MONTH(I.date)\nUNION\nSELECT MONTH(O.date) AS `month`\n , COUNT(I.ID) AS `countin`\n , COUNT(O.ID) AS `countOUT`\n FROM TableIN I\n RIGHT JOIN TableOUT O\n ON MONTH(I.Date) = MONTH(O.Date)\n GROUP BY MONTH(I.date);\n
\nResult:
\n| MONTH | COUNTIN | COUNTOUT |\n------------------------------\n| 5 | 1 | 1 |\n| 7 | 1 | 1 |\n| 6 | 0 | 1 |\n
\nSee this SQLFiddle
\nAlso to order your result by month you need to use a sub-query like this:
\nSELECT * FROM\n(\n SELECT MONTH(I.date) AS `month`\n , COUNT(I.ID) AS `countin`\n , COUNT(O.ID) AS `countOUT`\n FROM TableIN I\n LEFT JOIN TableOUT O\n ON MONTH(I.Date) = MONTH(O.Date)\n GROUP BY MONTH(I.date)\n UNION\n SELECT MONTH(O.date) AS `month`\n , COUNT(I.ID) AS `countin`\n , COUNT(O.ID) AS `countOUT`\n FROM TableIN I\n RIGHT JOIN TableOUT O\n ON MONTH(I.Date) = MONTH(O.Date)\n GROUP BY MONTH(I.date)\n ) tbl\nORDER BY Month;\n
\nSee this SQLFiddle
\n
soup wrap:
Join both tables on the month:
SELECT MONTH(I.date) AS `month`
, COUNT(I.ID) AS `countin`
, COUNT(O.ID) AS `countOUT`
FROM TableIN I
LEFT JOIN TableOUT O
ON MONTH(I.Date) = MONTH(O.Date)
GROUP BY MONTH(I.date)
UNION
SELECT MONTH(O.date) AS `month`
, COUNT(I.ID) AS `countin`
, COUNT(O.ID) AS `countOUT`
FROM TableIN I
RIGHT JOIN TableOUT O
ON MONTH(I.Date) = MONTH(O.Date)
GROUP BY MONTH(I.date);
Result:
| MONTH | COUNTIN | COUNTOUT |
------------------------------
| 5 | 1 | 1 |
| 7 | 1 | 1 |
| 6 | 0 | 1 |
See this SQLFiddle
Also to order your result by month you need to use a sub-query like this:
SELECT * FROM
(
SELECT MONTH(I.date) AS `month`
, COUNT(I.ID) AS `countin`
, COUNT(O.ID) AS `countOUT`
FROM TableIN I
LEFT JOIN TableOUT O
ON MONTH(I.Date) = MONTH(O.Date)
GROUP BY MONTH(I.date)
UNION
SELECT MONTH(O.date) AS `month`
, COUNT(I.ID) AS `countin`
, COUNT(O.ID) AS `countOUT`
FROM TableIN I
RIGHT JOIN TableOUT O
ON MONTH(I.Date) = MONTH(O.Date)
GROUP BY MONTH(I.date)
) tbl
ORDER BY Month;
See this SQLFiddle
qid & accept id:
(17875720, 17878753)
query:
SQL Joining 4 Tables
soup:
You don't need to join subqueries back onto the tables that they are sourced from, and you can JOIN directly onto them.
\nRather than JOINING a whole bunch of tabels directly, you could look at forming subqueries that get the correct constituent parts
\nSomething like the following may be what you are after:
\nSELECT tbl_hardware.HW_ID,\n tbl_hardware.Aktiv,\n tbl_hardware.typebradmodelID,\n typebradmodel.Type,\n typebradmodel.Brand,\n typebradmodel.Model,\n lastentry.Login,\n lastentry.since\nFROM (SELECT\n tbl_typebradmodel.typebradmodelID,\n tbl_type.tabel AS Type,\n tbl_brand.tabel AS Brand,\n tbl_model.tabel AS Model\n FROM tbl_typebradmodel\n LEFT OUTER JOIN tbl_type ON tbl_typebradmodel.TypID = tbl_type.TypID\n LEFT OUTER JOIN tbl_brand ON tbl_typebradmodel.MarkeID = tbl_brand.MarkeID\n LEFT OUTER JOIN tbl_model ON tbl_typebradmodel.ModelID = tbl_model.ModelID\n ) typebradmodel\nLEFT JOIN tbl_hardware ON tbl_hardware.typebradmodelID = typebradmodel.typebradmodelID\nLEFT JOIN \n (SELECT \n MAX(tbl_hardware_assignment.since) AS lastchange, \n tbl_hardware_assignment.HW_ID,\n tbl_accounts.Login\n FROM tbl_hardware_assignment\n LEFT OUTER JOIN tbl_accounts ON tbl_hardware_assignment.namenID = tbl_accounts.PersID\n GROUP BY tbl_hardware_assignment.HW_ID,tbl_accounts.Login ) lastentry ON tbl_hardware.HW_ID = lastentry.HW_ID\nWHERE tbl_hardware.Aktiv = 1 AND \n typebradmodel.Brand LIKE 'Samsung' AND\n lastentry.Login = 'MY_USERNAME'\n
\nUpdate\nThe critical part here is getting the lastchange subquery correct, i.e. using all the columns that describe the relation between tbl_hardware_assignment and tbl_accounts
\nSELECT \n MAX(tbl_hardware_assignment.since) AS lastchange, \n tbl_hardware_assignment.HW_ID,\n tbl_accounts.Login\nFROM tbl_hardware_assignment\nLEFT OUTER JOIN tbl_accounts ON tbl_hardware_assignment.namenID = tbl_accounts.PersID\nAND MAX(tbl_hardware_assignment.since) = tbl_accounts.lastchange\nGROUP BY tbl_hardware_assignment.HW_ID,tbl_accounts.Login \n
\ndoes this get the right ID's? and if it doesn't, are you able to find out what the relation between these two tables should involve?
\n
soup wrap:
You don't need to join subqueries back onto the tables they are sourced from; you can JOIN directly onto them.
Rather than joining a whole bunch of tables directly, you could look at forming subqueries that get the correct constituent parts.
Something like the following may be what you are after:
SELECT tbl_hardware.HW_ID,
tbl_hardware.Aktiv,
tbl_hardware.typebradmodelID,
typebradmodel.Type,
typebradmodel.Brand,
typebradmodel.Model,
lastentry.Login,
lastentry.since
FROM (SELECT
tbl_typebradmodel.typebradmodelID,
tbl_type.tabel AS Type,
tbl_brand.tabel AS Brand,
tbl_model.tabel AS Model
FROM tbl_typebradmodel
LEFT OUTER JOIN tbl_type ON tbl_typebradmodel.TypID = tbl_type.TypID
LEFT OUTER JOIN tbl_brand ON tbl_typebradmodel.MarkeID = tbl_brand.MarkeID
LEFT OUTER JOIN tbl_model ON tbl_typebradmodel.ModelID = tbl_model.ModelID
) typebradmodel
LEFT JOIN tbl_hardware ON tbl_hardware.typebradmodelID = typebradmodel.typebradmodelID
LEFT JOIN
(SELECT
MAX(tbl_hardware_assignment.since) AS lastchange,
tbl_hardware_assignment.HW_ID,
tbl_accounts.Login
FROM tbl_hardware_assignment
LEFT OUTER JOIN tbl_accounts ON tbl_hardware_assignment.namenID = tbl_accounts.PersID
GROUP BY tbl_hardware_assignment.HW_ID,tbl_accounts.Login ) lastentry ON tbl_hardware.HW_ID = lastentry.HW_ID
WHERE tbl_hardware.Aktiv = 1 AND
typebradmodel.Brand LIKE 'Samsung' AND
lastentry.Login = 'MY_USERNAME'
Update
The critical part here is getting the lastchange subquery correct, i.e. using all the columns that describe the relation between tbl_hardware_assignment and tbl_accounts
SELECT
MAX(tbl_hardware_assignment.since) AS lastchange,
tbl_hardware_assignment.HW_ID,
tbl_accounts.Login
FROM tbl_hardware_assignment
LEFT OUTER JOIN tbl_accounts ON tbl_hardware_assignment.namenID = tbl_accounts.PersID
AND MAX(tbl_hardware_assignment.since) = tbl_accounts.lastchange
GROUP BY tbl_hardware_assignment.HW_ID,tbl_accounts.Login
Does this get the right IDs? And if it doesn't, are you able to find out what the relation between these two tables should involve?
qid & accept id:
(17890157, 17890206)
query:
mysql show db column in multiple returned columns
soup:
If you know already that you only have two values for the week, you could use this query:
\nSELECT\n CodeID,\n MAX(CASE WHEN Week=1 THEN ItemID END) Week1,\n MAX(CASE WHEN Week=2 THEN ItemID END) Week2\nFROM\n yourtable\nGROUP BY\n CodeID\n
\nbut if the number of weeks is not known, you should use a dynamic query, like this:
\nSELECT\n CONCAT(\n 'SELECT CodeID,',\n GROUP_CONCAT(\n DISTINCT\n CONCAT('MAX(CASE WHEN Week=', Week, ' THEN ItemID END) Week', Week)),\n ' FROM yourtable GROUP BY CodeID;')\nFROM\n yourtable\nINTO @sql;\n\nPREPARE stmt FROM @sql;\nEXECUTE stmt;\n
\nPlease see fiddle here.
\nEdit
\nIf there are multiple items in the same week, you could use GROUP_CONCAT aggregated function instead of MAX:
\nSELECT\n CodeID,\n GROUP_CONCAT(DISTINCT CASE WHEN Week=1 THEN ItemID END) Week1,\n GROUP_CONCAT(DISTINCT CASE WHEN Week=2 THEN ItemID END) Week2\nFROM\n yourtable\nGROUP BY\n CodeID;\n
\n
soup wrap:
If you know already that you only have two values for the week, you could use this query:
SELECT
CodeID,
MAX(CASE WHEN Week=1 THEN ItemID END) Week1,
MAX(CASE WHEN Week=2 THEN ItemID END) Week2
FROM
yourtable
GROUP BY
CodeID
but if the number of weeks is not known, you should use a dynamic query, like this:
SELECT
CONCAT(
'SELECT CodeID,',
GROUP_CONCAT(
DISTINCT
CONCAT('MAX(CASE WHEN Week=', Week, ' THEN ItemID END) Week', Week)),
' FROM yourtable GROUP BY CodeID;')
FROM
yourtable
INTO @sql;
PREPARE stmt FROM @sql;
EXECUTE stmt;
Please see fiddle here.
Edit
If there are multiple items in the same week, you could use GROUP_CONCAT aggregated function instead of MAX:
SELECT
CodeID,
GROUP_CONCAT(DISTINCT CASE WHEN Week=1 THEN ItemID END) Week1,
GROUP_CONCAT(DISTINCT CASE WHEN Week=2 THEN ItemID END) Week2
FROM
yourtable
GROUP BY
CodeID;
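The MAX(CASE ...) pivot works the same way in sqlite3; a sketch with invented sample data, including a CodeID that has no Week 2 row (it pivots to NULL/None):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE yourtable (CodeID INT, Week INT, ItemID INT)")
conn.executemany("INSERT INTO yourtable VALUES (?, ?, ?)",
                 [(7, 1, 100), (7, 2, 200), (8, 1, 300)])

# CASE picks out each week's ItemID; MAX collapses the group to one row.
rows = conn.execute("""
    SELECT CodeID,
           MAX(CASE WHEN Week = 1 THEN ItemID END) AS Week1,
           MAX(CASE WHEN Week = 2 THEN ItemID END) AS Week2
    FROM yourtable
    GROUP BY CodeID
    ORDER BY CodeID
""").fetchall()
print(rows)  # [(7, 100, 200), (8, 300, None)]
```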
qid & accept id:
(17910415, 17910512)
query:
using where clause in REPLACE statement
soup:
REPLACE works by matching the primary key. If you specify a primary key value in the REPLACE and no row with that value exists, it works like INSERT. If the primary key value you try to insert already exists in the table, it overwrites the other columns of the row.
\nSo there is no need for a WHERE clause. It's implicitly looking for WHERE pk = value.
\nIf you want it to detect the package detail for a given user and you want to use REPLACE, you must make the userid the primary key.
\nCREATE TABLE userpackages (\n userid INT PRIMARY KEY,\n package_detail TEXT,\n FOREIGN KEY (userid) REFERENCES users(userid)\n);\n
\nFirst we add the user's first package:
\nREPLACE INTO userpackages (userid, package_detail) \nVALUES (1234, 'some package');\n
\nNext we change the package for user 1234:
\nREPLACE INTO userpackages (userid, package_detail) \nVALUES (1234, 'some other package');\n
\nIf userid isn't your primary key, then REPLACE isn't going to work.
\n
soup wrap:
REPLACE works by matching the primary key. If you specify a primary key value in the REPLACE and no row with that value exists, it works like INSERT. If the primary key value you try to insert already exists in the table, it overwrites the other columns of the row.
So there is no need for a WHERE clause. It's implicitly looking for WHERE pk = value.
If you want it to detect the package detail for a given user and you want to use REPLACE, you must make the userid the primary key.
CREATE TABLE userpackages (
userid INT PRIMARY KEY,
package_detail TEXT,
FOREIGN KEY (userid) REFERENCES users(userid)
);
First we add the user's first package:
REPLACE INTO userpackages (userid, package_detail)
VALUES (1234, 'some package');
Next we change the package for user 1234:
REPLACE INTO userpackages (userid, package_detail)
VALUES (1234, 'some other package');
If userid isn't your primary key, then REPLACE isn't going to work.
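SQLite happens to support the same REPLACE statement, so the overwrite-by-primary-key behavior is easy to demonstrate end to end (sample data invented, foreign key omitted for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE userpackages (
                    userid INTEGER PRIMARY KEY,
                    package_detail TEXT)""")

# First REPLACE acts like INSERT; the second matches the primary key
# and overwrites the row instead of adding a new one.
conn.execute("REPLACE INTO userpackages VALUES (1234, 'some package')")
conn.execute("REPLACE INTO userpackages VALUES (1234, 'some other package')")

rows = conn.execute("SELECT * FROM userpackages").fetchall()
print(rows)  # [(1234, 'some other package')] -- still just one row
```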
qid & accept id:
(17925232, 17936398)
query:
How to put data from the database to the template
soup:
Magento works in this way, in MVC design pattern, is different to the usual MVC.\nIn Magento we have:\n- Model\n- View :\n - Blocks\n - Layouts\n - Templates\n The blocks grabs the data from the model, and pass this data to the template, all through the layout system.\n- Controllers
\nSo, the answer to your question is that your need a method in one model, and invoke it through the block, and then, pass the data in this way:\nblock:
\nclass Mynamespace_Mymodule_Block_Myblock extends Mage_Core_Block_Template\n{\n public function getMyProductData()\n {\n $product = Mage::getModel('catalog/product')->load($id);\n return $product; \n }\n} \n
\nAnd then you can retrieve it in your phtml like this:
\n$_product = $this->getMyProductData();\necho $_product->getName();\n
\nGreetings from México :D
\n
soup wrap:
Magento implements the MVC design pattern, but differently from the usual MVC.
In Magento we have:
- Model
- View :
- Blocks
- Layouts
- Templates
The blocks grab the data from the model and pass it to the template, all through the layout system.
- Controllers
So, the answer to your question is that you need a method in a model, invoke it through the block, and then pass the data this way:
block:
class Mynamespace_Mymodule_Block_Myblock extends Mage_Core_Block_Template
{
public function getMyProductData()
{
$product = Mage::getModel('catalog/product')->load($id);
return $product;
}
}
And then you can retrieve it in your phtml like this:
$_product = $this->getMyProductData();
echo $_product->getName();
Greetings from México :D
qid & accept id:
(18001322, 18001398)
query:
User Defined Variable in MySQL Insert Query
soup:
try this
\n INSERT INTO msMenu (column1, column2 , column3)\n SELECT COALESCE( MAX( menuId ) , 0 ) +1 ,'My Menu', '1' \n FROM msMenu;\n
\nEDIT2:
\n SET @newId = (select COALESCE( MAX( menuId ) , 0 ) +1 from msMenu)\n INSERT INTO msMenu (column1, column2 , column3)\n SELECT @newId ,'My Menu', '1' \n FROM msMenu;\n
\n
soup wrap:
try this
INSERT INTO msMenu (column1, column2 , column3)
SELECT COALESCE( MAX( menuId ) , 0 ) +1 ,'My Menu', '1'
FROM msMenu;
EDIT2:
SET @newId = (select COALESCE( MAX( menuId ) , 0 ) +1 from msMenu)
INSERT INTO msMenu (column1, column2 , column3)
SELECT @newId ,'My Menu', '1'
FROM msMenu;
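A sqlite3 sketch of the COALESCE(MAX(...), 0) + 1 pattern (table invented). Note this is not safe under concurrent inserts, which is one reason identity/auto-increment columns are usually preferred:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE msMenu (menuId INT, name TEXT, flag TEXT)")

# Works even on an empty table: MAX() is NULL there, COALESCE maps it to 0.
for label in ("My Menu", "Second"):
    conn.execute("""INSERT INTO msMenu (menuId, name, flag)
                    SELECT COALESCE(MAX(menuId), 0) + 1, ?, '1'
                    FROM msMenu""", (label,))

ids = [r[0] for r in conn.execute("SELECT menuId FROM msMenu ORDER BY menuId")]
print(ids)  # [1, 2]
```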
qid & accept id:
(18020825, 18021203)
query:
Convert datetime to MM/dd/yyyy HH:MM:SS AM/PM
soup:
Your current SET doesn't even work. When you have a valid datetime value coming in from a string literal, you can do this:
\nDECLARE @adddate DATETIME;\n\nSET @adddate = '2011-07-06T22:30:07.521';\n\nSELECT CONVERT(CHAR(11), @adddate, 103) \n + LTRIM(RIGHT(CONVERT(CHAR(20), @adddate, 22), 11));\n
\nResult:
\n06/07/2011 10:30:07 PM\n
\nIf you actually want m/d/y (your question is ambiguous), there is a slightly shorter path using style 22:
\nDECLARE @adddate DATETIME;\n\nSET @adddate = '2011-07-06T22:30:07.521';\n\nSELECT STUFF(CONVERT(CHAR(20), @adddate, 22), 7, 2, YEAR(@adddate));\n
\nResult:
\n07/06/2011 10:30:07 PM\n
\nHowever, this is a bad idea for two reasons:
\n\nregional formats are confusing (will a reader know 05/06/2013 is May 6th and not June 5th? Depends on where they're from) and even dangerous (if they pass that string back in, you might store June 5th when they meant May 6th).
\nyour client language is better off using it's own Format() or ToString() methods to format this for display at the very last moment possible.
\n
\n
soup wrap:
Your current SET doesn't even work. When you have a valid datetime value coming in from a string literal, you can do this:
DECLARE @adddate DATETIME;
SET @adddate = '2011-07-06T22:30:07.521';
SELECT CONVERT(CHAR(11), @adddate, 103)
+ LTRIM(RIGHT(CONVERT(CHAR(20), @adddate, 22), 11));
Result:
06/07/2011 10:30:07 PM
If you actually want m/d/y (your question is ambiguous), there is a slightly shorter path using style 22:
DECLARE @adddate DATETIME;
SET @adddate = '2011-07-06T22:30:07.521';
SELECT STUFF(CONVERT(CHAR(20), @adddate, 22), 7, 2, YEAR(@adddate));
Result:
07/06/2011 10:30:07 PM
However, this is a bad idea for two reasons:
regional formats are confusing (will a reader know 05/06/2013 is May 6th and not June 5th? Depends on where they're from) and even dangerous (if they pass that string back in, you might store June 5th when they meant May 6th).
your client language is better off using its own Format() or ToString() methods to format this for display at the very last moment possible.
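In Python, for example, the last-moment formatting looks like this (the sample datetime is the one from the answer; %I is the 12-hour clock and %p the AM/PM marker):

```python
from datetime import datetime

dt = datetime(2011, 7, 6, 22, 30, 7)

# Format for display only at the presentation layer.
s = dt.strftime("%m/%d/%Y %I:%M:%S %p")
print(s)  # 07/06/2011 10:30:07 PM
```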
qid & accept id:
(18105224, 18105394)
query:
Convert varchar data to datetime in SQL server when source data is w/o format
soup:
You can make it a little more compact by not forcing the dashes, and using STUFF instead of SUBSTRING:
\nDECLARE @Var VARCHAR(100) = '20130120161643730';\n\nSET @Var = LEFT(@Var, 8) + ' ' \n + STUFF(STUFF(STUFF(RIGHT(@Var, 9),3,0,':'),6,0,':'),9,0,'.');\n\nSELECT [string] = @Var, [datetime] = CONVERT(DATETIME, @Var);\n
\nResults:
\nstring datetime\n--------------------- -----------------------\n20130120 16:16:43.730 2013-01-20 16:16:43.730\n
\n
soup wrap:
You can make it a little more compact by not forcing the dashes, and using STUFF instead of SUBSTRING:
DECLARE @Var VARCHAR(100) = '20130120161643730';
SET @Var = LEFT(@Var, 8) + ' '
+ STUFF(STUFF(STUFF(RIGHT(@Var, 9),3,0,':'),6,0,':'),9,0,'.');
SELECT [string] = @Var, [datetime] = CONVERT(DATETIME, @Var);
Results:
string datetime
--------------------- -----------------------
20130120 16:16:43.730 2013-01-20 16:16:43.730
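The STUFF calls are just fixed-position insertions into a `YYYYMMDDHHMMSSmmm` string. A quick Python sketch of the same reshaping, using the sample value from the answer:

```python
from datetime import datetime

var = "20130120161643730"  # YYYYMMDDHHMMSSmmm, no delimiters

# Same effect as the three STUFF calls: YYYYMMDD + ' ' + HH:MM:SS.mmm
reshaped = f"{var[:8]} {var[8:10]}:{var[10:12]}:{var[12:14]}.{var[14:]}"
print(reshaped)  # 20130120 16:16:43.730

# The reshaped string now parses cleanly as a datetime.
dt = datetime.strptime(reshaped, "%Y%m%d %H:%M:%S.%f")
print(dt)
```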
qid & accept id:
(18107553, 18107667)
query:
How to replace an int with text in a query
soup:
You should consider storing the lookup in a new table... but just so you're aware of your options, you can also use the DATENAME(WEEKDAY) function:
\nSELECT DATENAME(WEEKDAY, 0)\n
\nReturns:
\nMonday\n
\n\n
soup wrap:
You should consider storing the lookup in a new table... but just so you're aware of your options, you can also use the DATENAME(WEEKDAY) function:
SELECT DATENAME(WEEKDAY, 0)
Returns:
Monday
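For comparison, Python's standard `calendar` module offers the same integer-to-weekday-name lookup, with 0 mapping to Monday just like `DATENAME(WEEKDAY, 0)` does here:

```python
import calendar

# calendar.day_name indexes weekday names starting at Monday (index 0).
print(calendar.day_name[0])  # Monday
```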
qid & accept id:
(18111896, 18111953)
query:
Filling in missing data
soup:
Something pretty basic could be
\nSELECT MT.Date, MT.Text, \n CASE WHEN MT.Text = 'bbb' THEN Number\n ELSE (SELECT TOP 1 Number \n FROM MyTable MT2 \n WHERE MT2.Date < MT.Date AND \n MT2.Text = 'bbb'\n ORDER BY MT2.Date DESC)\n END Number,\n CASE WHEN MT.Text = 'bbb' THEN Number2\n ELSE (SELECT TOP 1 Number2 \n FROM MyTable MT2 \n WHERE MT2.Date < MT.Date AND \n MT2.Text = 'bbb'\n ORDER BY MT2.Date DESC)\n END Number2 \n FROM MyTable MT\n
\nSQLFiddle: http://sqlfiddle.com/#!3/cbee5/3
\nor using OUTER APPLY (it should be faster)
\nSELECT MT.Date, MT.Text, \n CASE WHEN MT.Text = 'bbb' THEN MT.Number\n ELSE MT2.Number \n END Number,\n CASE WHEN MT.Text = 'bbb' THEN MT.Number2\n ELSE MT2.Number2\n END Number2\n FROM MyTable MT\n OUTER APPLY (SELECT TOP 1 MT2.Number, MT2.Number2 \n FROM MyTable MT2\n WHERE MT.Text <> 'bbb' AND \n MT2.Text = 'bbb' AND \n MT2.Date < MT.Date\n ORDER BY MT2.Date DESC\n ) MT2\n
\nSQLFiddle: http://sqlfiddle.com/#!3/cbee5/7
\n
soup wrap:
Something pretty basic could be
SELECT MT.Date, MT.Text,
CASE WHEN MT.Text = 'bbb' THEN Number
ELSE (SELECT TOP 1 Number
FROM MyTable MT2
WHERE MT2.Date < MT.Date AND
MT2.Text = 'bbb'
ORDER BY MT2.Date DESC)
END Number,
CASE WHEN MT.Text = 'bbb' THEN Number2
ELSE (SELECT TOP 1 Number2
FROM MyTable MT2
WHERE MT2.Date < MT.Date AND
MT2.Text = 'bbb'
ORDER BY MT2.Date DESC)
END Number2
FROM MyTable MT
SQLFiddle: http://sqlfiddle.com/#!3/cbee5/3
or using OUTER APPLY (it should be faster)
SELECT MT.Date, MT.Text,
CASE WHEN MT.Text = 'bbb' THEN MT.Number
ELSE MT2.Number
END Number,
CASE WHEN MT.Text = 'bbb' THEN MT.Number2
ELSE MT2.Number2
END Number2
FROM MyTable MT
OUTER APPLY (SELECT TOP 1 MT2.Number, MT2.Number2
FROM MyTable MT2
WHERE MT.Text <> 'bbb' AND
MT2.Text = 'bbb' AND
MT2.Date < MT.Date
ORDER BY MT2.Date DESC
) MT2
SQLFiddle: http://sqlfiddle.com/#!3/cbee5/7
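The correlated-subquery variant runs unchanged (apart from `TOP 1` becoming `LIMIT 1`) on SQLite, which makes it easy to try locally. A self-contained sketch with made-up sample rows, shown for one `Number` column:

```python
import sqlite3

# Table and column names follow the answer's example; the rows are invented.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE MyTable (Date TEXT, Text TEXT, Number INTEGER);
INSERT INTO MyTable VALUES
  ('2013-01-01', 'bbb', 10),
  ('2013-01-02', 'aaa', NULL),
  ('2013-01-03', 'bbb', 20),
  ('2013-01-04', 'ccc', NULL);
""")
rows = con.execute("""
SELECT MT.Date, MT.Text,
       CASE WHEN MT.Text = 'bbb' THEN MT.Number
            ELSE (SELECT Number FROM MyTable MT2
                  WHERE MT2.Date < MT.Date AND MT2.Text = 'bbb'
                  ORDER BY MT2.Date DESC LIMIT 1)
       END AS Number
FROM MyTable MT
ORDER BY MT.Date
""").fetchall()
print(rows)
```

Each non-'bbb' row picks up the `Number` from the most recent preceding 'bbb' row, which is exactly the gap-filling behaviour the question asks for.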
qid & accept id:
(18146788, 18149151)
query:
From XML to list of paths in Oracle PL/SQL environment
soup:
You can use XMLTable to produce a list of paths with XQuery.
\nE.g.
\n\nwith params as (\n select \n xmltype('\n \n 0123 \n 2345 \n \n 3 \n \n \n ') p_xml\n from dual \n) \nselect\n path_name || '/text()'\nfrom\n XMLTable(\n '\n for $i in $doc/descendant-or-self::*\n return {$i/string-join(ancestor-or-self::*/name(.), ''/'')} \n '\n passing (select p_xml from params) as "doc"\n columns path_name varchar2(4000) path '//element_path'\n )\n
\nbut this is the wrong approach, if only because it's not as efficient as it could be.
\nJust extract all values with the same XQuery:\n(SQLFiddle)
\nwith params as (\n select \n xmltype('\n \n 0123 \n 2345 \n \n 3 \n \n \n ') p_xml\n from dual \n) \nselect\n element_path, element_text\nfrom\n XMLTable(\n ' \n for $i in $doc/descendant-or-self::*\n return \n {$i/string-join(ancestor-or-self::*/name(.), ''/'')} \n {$i/text()} \n \n '\n passing (select p_xml from params) as "doc"\n columns \n element_path varchar2(4000) path '//element_path',\n element_text varchar2(4000) path '//element_content'\n )\n
\n
soup wrap:
You can use XMLTable to produce a list of paths with XQuery.
E.g.
with params as (
select
xmltype('
0123
2345
3
') p_xml
from dual
)
select
path_name || '/text()'
from
XMLTable(
'
for $i in $doc/descendant-or-self::*
return {$i/string-join(ancestor-or-self::*/name(.), ''/'')}
'
passing (select p_xml from params) as "doc"
columns path_name varchar2(4000) path '//element_path'
)
but this is the wrong approach, if only because it's not as efficient as it could be.
Just extract all values with the same XQuery:
(SQLFiddle)
with params as (
select
xmltype('
0123
2345
3
') p_xml
from dual
)
select
element_path, element_text
from
XMLTable(
'
for $i in $doc/descendant-or-self::*
return
{$i/string-join(ancestor-or-self::*/name(.), ''/'')}
{$i/text()}
'
passing (select p_xml from params) as "doc"
columns
element_path varchar2(4000) path '//element_path',
element_text varchar2(4000) path '//element_content'
)
qid & accept id:
(18186212, 18335335)
query:
How to escape the "." reserved symbol when using an input for an sql script
soup:
While I waited for an answer I found the following solutions:
\n "set define off" and using \.\n
\nOR
\n "set escape ON" and using .\n
\nAnd turning the properties back to their default values after using them. I ended up using Nicholas Krasnov's solution of using a "&1..TABLEX" because it didn't require any property change. Thank you!
\n
soup wrap:
While I waited for an answer I found the following solutions:
"set define off" and using \.
OR
"set escape ON" and using .
And turning the properties back to their default values after using them. I ended up using Nicholas Krasnov's solution of using a "&1..TABLEX" because it didn't require any property change. Thank you!
qid & accept id:
(18187989, 18188052)
query:
Query to get only one row from multiple rows having same values
soup:
To get the latest row in MySQL, you need to use a join or correlated subquery:
\nSELECT id, user_receiver, user_sender, post_id, action, date, is_read\nFROM notification n\nWHERE user_receiver=$ses_user and\n date = (select max(date)\n from notification n2\n where n2.user_sender = n.user_sender and\n n2.action = n.action and\n n2.post_id = n.post_id and\n n2.is_read = n.is_read\n )\norder by date desc;\n
\nIn other databases, you would simply use the row_number() function (or distinct on in Postgres).
\nEDIT:
\nFor the biggest id:
\nSELECT id, user_receiver, user_sender, post_id, action, date, is_read\nFROM notification n\nWHERE user_receiver=$ses_user and\n id = (select max(id)\n from notification n2\n where n2.user_sender = n.user_sender and\n n2.action = n.action and\n n2.post_id = n.post_id\n )\norder by date desc;\n
\nIf you want the number of rows where isread = 1, then you can do something like:
\nSELECT sum(is_read = 1)\nFROM notification n\nWHERE user_receiver=$ses_user and\n id = (select max(id)\n from notification n2\n where n2.user_sender = n.user_sender and\n n2.action = n.action and\n n2.post_id = n.post_id\n );\n
\n
soup wrap:
To get the latest row in MySQL, you need to use a join or correlated subquery:
SELECT id, user_receiver, user_sender, post_id, action, date, is_read
FROM notification n
WHERE user_receiver=$ses_user and
date = (select max(date)
from notification n2
where n2.user_sender = n.user_sender and
n2.action = n.action and
n2.post_id = n.post_id and
n2.is_read = n.is_read
)
order by date desc;
In other databases, you would simply use the row_number() function (or distinct on in Postgres).
EDIT:
For the biggest id:
SELECT id, user_receiver, user_sender, post_id, action, date, is_read
FROM notification n
WHERE user_receiver=$ses_user and
id = (select max(id)
from notification n2
where n2.user_sender = n.user_sender and
n2.action = n.action and
n2.post_id = n.post_id
)
order by date desc;
If you want the number of rows where isread = 1, then you can do something like:
SELECT sum(is_read = 1)
FROM notification n
WHERE user_receiver=$ses_user and
id = (select max(id)
from notification n2
where n2.user_sender = n.user_sender and
n2.action = n.action and
n2.post_id = n.post_id
);
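The `max(id)` correlated subquery works the same way on SQLite, so it is easy to verify locally. A stripped-down sketch (the extra columns and the `$ses_user` filter are dropped for brevity; the sample rows are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE notification (id INTEGER, user_sender TEXT, post_id INTEGER, action TEXT);
INSERT INTO notification VALUES
  (1, 'amy', 7, 'like'),
  (2, 'amy', 7, 'like'),   -- duplicate of row 1; only this later one should survive
  (3, 'bob', 9, 'like');
""")
rows = con.execute("""
SELECT id, user_sender
FROM notification n
WHERE id = (SELECT max(id) FROM notification n2
            WHERE n2.user_sender = n.user_sender
              AND n2.action = n.action
              AND n2.post_id = n.post_id)
ORDER BY id
""").fetchall()
print(rows)  # only the row with the biggest id per (sender, action, post) group
```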
qid & accept id:
(18251762, 18251844)
query:
Remove duplicates if you have only one column with value
soup:
If you are allowed to use a CTE:
\nwith cte as (\n select\n row_number() over(partition by Value order by Value) as row_num,\n Value\n from Table1\n)\ndelete from cte where row_num > 1\n
\n\nas t-clausen.dk suggested in comments, you don't even need value inside the CTE:
\nwith cte as (\n select\n row_number() over(partition by Value order by Value) as row_num\n from Table1\n)\ndelete from cte where row_num > 1;\n
\n
soup wrap:
If you are allowed to use a CTE:
with cte as (
select
row_number() over(partition by Value order by Value) as row_num,
Value
from Table1
)
delete from cte where row_num > 1
as t-clausen.dk suggested in comments, you don't even need value inside the CTE:
with cte as (
select
row_number() over(partition by Value order by Value) as row_num
from Table1
)
delete from cte where row_num > 1;
qid & accept id:
(18277282, 18277521)
query:
Time Since Last Purchase
soup:
I think this is most easily done with a correlated subquery:
\nselect t.*,\n datediff((select t2.TransactionDate\n from t t2\n where t2.CustomerId = t.CustomerId and\n t2.TransactionDate < t.TransactionDate\n order by t2.TransactionDate desc\n limit 1\n ), t.TransactionDate) as daysSinceLastPurchase\nfrom t;\n
\nThis makes the assumption that transactions occur on different days.
\nIf this assumption is not true and the transaction ids are in ascending order, you can use:
\nselect t.*,\n datediff((select t2.TransactionDate\n from t t2\n where t2.CustomerId = t.CustomerId and\n t2.TransactionId < t.TransactionId\n order by t2.TransactionId desc\n limit 1\n ), t.TransactionDate) as daysSinceLastPurchase\nfrom t;\n
\n
soup wrap:
I think this is most easily done with a correlated subquery:
select t.*,
datediff((select t2.TransactionDate
from t t2
where t2.CustomerId = t.CustomerId and
t2.TransactionDate < t.TransactionDate
order by t2.TransactionDate desc
limit 1
), t.TransactionDate) as daysSinceLastPurchase
from t;
This makes the assumption that transactions occur on different days.
If this assumption is not true and the transaction ids are in ascending order, you can use:
select t.*,
datediff((select t2.TransactionDate
from t t2
where t2.CustomerId = t.CustomerId and
t2.TransactionId < t.TransactionId
order by t2.TransactionId desc
limit 1
), t.TransactionDate) as daysSinceLastPurchase
from t;
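The same correlated-subquery idea can be tried on SQLite, where `julianday()` stands in for MySQL's `datediff()` (note the sign is flipped here so the day count comes out positive) and `LIMIT 1` is already the right syntax. Sample rows are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (CustomerId INTEGER, TransactionId INTEGER, TransactionDate TEXT);
INSERT INTO t VALUES (1, 1, '2013-08-01'), (1, 2, '2013-08-04'), (2, 3, '2013-08-02');
""")
rows = con.execute("""
SELECT t.TransactionId,
       CAST(julianday(t.TransactionDate) -
            julianday((SELECT t2.TransactionDate FROM t t2
                       WHERE t2.CustomerId = t.CustomerId
                         AND t2.TransactionDate < t.TransactionDate
                       ORDER BY t2.TransactionDate DESC LIMIT 1)) AS INTEGER)
       AS daysSinceLastPurchase
FROM t
ORDER BY t.TransactionId
""").fetchall()
print(rows)  # first purchase per customer has no predecessor, so NULL/None
```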
qid & accept id:
(18289563, 18289719)
query:
How to retrieve samples from the database?
soup:
If you want to get all posts that have tags in a comma delimited list:
\nselect postid\nfrom post_tags\nwhere find_in_set(tagid, @LIST) > 0\ngroup by postid\nhaving count(distinct tagid) = 1+length(@LIST) - length(replace(@LIST, ',', ''));\n
\nIf you want just a "sample" of them:
\nselect postid\nfrom (select postid\n from post_tags\n where find_in_set(tagid, @LIST) > 0\n group by postid\n having count(distinct tagid) = 1+length(@LIST) - length(replace(@LIST, ',', ''))\n ) t\norder by rand()\nlimit 5\n
\n
soup wrap:
If you want to get all posts that have tags in a comma delimited list:
select postid
from post_tags
where find_in_set(tagid, @LIST) > 0
group by postid
having count(distinct tagid) = 1+length(@LIST) - length(replace(@LIST, ',', ''));
If you want just a "sample" of them:
select postid
from (select postid
from post_tags
where find_in_set(tagid, @LIST) > 0
group by postid
having count(distinct tagid) = 1+length(@LIST) - length(replace(@LIST, ',', ''))
) t
order by rand()
limit 5
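The `HAVING` clause relies on a comma-counting trick: the number of items in the list is one more than the number of commas, computed by comparing the list's length with and without its commas. The same arithmetic in Python, with a made-up tag list:

```python
# Hypothetical comma-delimited list, standing in for @LIST.
tag_list = "3,7,12"

# Items = commas + 1; commas = length difference after stripping them.
n_tags = 1 + len(tag_list) - len(tag_list.replace(",", ""))
print(n_tags)  # 3
```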
qid & accept id:
(18320028, 18320074)
query:
Get the names of all Triggers currently in the database via SQL statement (Oracle SQL Developer)
soup:
What you have is pretty close:
\nselect owner, object_name\nfrom all_objects\nwhere object_type = 'TRIGGER'\n
\nOr more usefully:
\nselect owner, trigger_name, table_owner, table_name, triggering_event\nfrom all_triggers\n
\nall_triggers has other columns to give you more information than all_objects does, like when the trigger fires. You can get more information about this and other useful data dictionary views in the documentation.
\n
soup wrap:
What you have is pretty close:
select owner, object_name
from all_objects
where object_type = 'TRIGGER'
Or more usefully:
select owner, trigger_name, table_owner, table_name, triggering_event
from all_triggers
all_triggers has other columns to give you more information than all_objects does, like when the trigger fires. You can get more information about this and other useful data dictionary views in the documentation.
qid & accept id:
(18359263, 18359482)
query:
Copying Data from one table into another and simultaneously add another column
soup:
Here is an example using create table as syntax:
\nCREATE TABLE NEW_TBL AS\n SELECT Col1, Col2, Col3, 'Newcol' as Col4\n FROM OLD_TBL;\n
\nTo assign a data type, use cast() or convert() to get the type you want:
\nCREATE TABLE NEW_TBL AS\n SELECT Col1, Col2, Col3, cast('Newcol' as varchar(255)) as Col4,\n cast(123 as decimal(18, 2)) as Col5\n FROM OLD_TBL;\n
\nBy the way, you can also add the column directly to the old table:
\nalter table old_tbl add col4 varchar(255);\n
\nYou can then update the value there, if you wish.
\n
soup wrap:
Here is an example using create table as syntax:
CREATE TABLE NEW_TBL AS
SELECT Col1, Col2, Col3, 'Newcol' as Col4
FROM OLD_TBL;
To assign a data type, use cast() or convert() to get the type you want:
CREATE TABLE NEW_TBL AS
SELECT Col1, Col2, Col3, cast('Newcol' as varchar(255)) as Col4,
cast(123 as decimal(18, 2)) as Col5
FROM OLD_TBL;
By the way, you can also add the column directly to the old table:
alter table old_tbl add col4 varchar(255);
You can then update the value there, if you wish.
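`CREATE TABLE ... AS SELECT` with an extra constant column also works verbatim in SQLite, which makes a quick local check easy (table names follow the answer; the rows are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE OLD_TBL (Col1 INTEGER, Col2 TEXT);
INSERT INTO OLD_TBL VALUES (1, 'a'), (2, 'b');

-- Copy the rows and add a constant extra column in one statement.
CREATE TABLE NEW_TBL AS
  SELECT Col1, Col2, 'Newcol' AS Col3 FROM OLD_TBL;
""")
rows = con.execute("SELECT * FROM NEW_TBL ORDER BY Col1").fetchall()
print(rows)  # every copied row carries the new constant column
```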
qid & accept id:
(18377746, 18378117)
query:
Change/Update part of string in MySQL
soup:
Try
\nUPDATE ifns_code INNER JOIN\n( SELECT name n, REPLACE(fio,'**!!!**','**???**') f FROM ifns_code ) t ON n=name\nSET ifns_code.fio=REPLACE(REPLACE(f,'**!!!**',code),'**???**',name)\n
\nThis will do both replace operations, first the three letter code (whose name I don't know, I have used code as a name) and then the name. If you want the last **!!!** instance to remain as it is, just replace name with **!!!** in the outer REPLACE function.
\nEdit:
\nNow, having a clear description of what you want, I can provide you with the desired UPDATE statement:
\nUPDATE ifns_code INNER JOIN (\n SELECT name n,instr(fio,'Profile/') i,instr(fio,'">
http://sqlfiddle.com/#!8/3c1a4/1\nIn the derived table expression I evaluate the positions before (i) and after (j) the string portion I want to change. The rest is just a combination of substring and concat.
\n
soup wrap:
Try
UPDATE ifns_code INNER JOIN
( SELECT name n, REPLACE(fio,'**!!!**','**???**') f FROM ifns_code ) t ON n=name
SET ifns_code.fio=REPLACE(REPLACE(f,'**!!!**',code),'**???**',name)
This will do both replace operations, first the three letter code (whose name I don't know, I have used code as a name) and then the name. If you want the last **!!!** instance to remain as it is, just replace name with **!!!** in the outer REPLACE function.
Edit:
Now, having a clear description of what you want, I can provide you with the desired UPDATE statement:
UPDATE ifns_code INNER JOIN (
SELECT name n,instr(fio,'Profile/') i,instr(fio,'">
http://sqlfiddle.com/#!8/3c1a4/1
In the derived table expression I evaluate the positions before (i) and after (j) the string portion I want to change. The rest is just a combination of substring and concat.
qid & accept id:
(18404055, 18405706)
query:
Index for finding an element in a JSON array
soup:
jsonb in Postgres 9.4+
\nWith the new binary JSON data type jsonb, Postgres 9.4 introduced largely improved index options. You can now have a GIN index on a jsonb array directly:
\nCREATE TABLE tracks (id serial, artists jsonb);\nCREATE INDEX tracks_artists_gin_idx ON tracks USING gin (artists);
\nNo need for a function to convert the array. This would support a query:
\nSELECT * FROM tracks WHERE artists @> '[{"name": "The Dirty Heads"}]';\n
\n@> being the new jsonb "contains" operator, which can use the GIN index. (Not for type json, only jsonb!)
\nOr you use the more specialized, non-default GIN operator class jsonb_path_ops for the index:
\nCREATE INDEX tracks_artists_gin_idx ON tracks\nUSING gin (artists jsonb_path_ops);
\nSame query.
\n
\nIf artists only holds names as displayed in the example, it would be more efficient to store a less redundant JSON value to begin with: just the values as text primitives and the redundant key can be in the column name.
\nNote the difference between JSON objects and primitive types:
\n\nCREATE TABLE tracks (id serial, artistnames jsonb);\nINSERT INTO tracks VALUES (2, '["The Dirty Heads", "Louis Richards"]');\n\nCREATE INDEX tracks_artistnames_gin_idx ON tracks USING gin (artistnames);
\nQuery:
\nSELECT * FROM tracks WHERE artistnames ? 'The Dirty Heads';\n
\n? does not work for object values, just keys and array elements.
\nOr (more efficient if names are repeated often):
\nCREATE INDEX tracks_artistnames_gin_idx ON tracks\nUSING gin (artistnames jsonb_path_ops);\n
\nQuery:
\nSELECT * FROM tracks WHERE artistnames @> '"The Dirty Heads"'::jsonb;\n
\njsonb_path_ops currently only supports indexing the @> operator.
\nThere are more index options, details in the manual.
\njson in Postgres 9.3+
\nThis should work with an IMMUTABLE function:
\nCREATE OR REPLACE FUNCTION json2arr(_j json, _key text)\n RETURNS text[] LANGUAGE sql IMMUTABLE AS\n'SELECT ARRAY(SELECT elem->>_key FROM json_array_elements(_j) elem)';\n
\nCreate this functional index:
\nCREATE INDEX tracks_artists_gin_idx ON tracks\nUSING gin (json2arr(artists, 'name'));\n
\nAnd use a query like this. The expression in the WHERE clause has to match the one in the index:
\nSELECT * FROM tracks\nWHERE '{"The Dirty Heads"}'::text[] <@ (json2arr(artists, 'name'));\n
\nUpdated with feedback in comments. We need to use array operators to support the GIN index.
\nThe "is contained by" operator <@ in this case.
\nNotes on function volatility
\nYou can declare your function IMMUTABLE even though json_array_elements() itself wasn't.
\nMost JSON functions used to be only STABLE, not IMMUTABLE. There was a discussion on the hackers list to change that. Most are IMMUTABLE now. Check with:
\nSELECT p.proname, p.provolatile\nFROM pg_proc p\nJOIN pg_namespace n ON n.oid = p.pronamespace\nWHERE n.nspname = 'pg_catalog'\nAND p.proname ~~* '%json%';\n
\nFunctional indexes only work with IMMUTABLE functions.
\n
soup wrap:
jsonb in Postgres 9.4+
With the new binary JSON data type jsonb, Postgres 9.4 introduced largely improved index options. You can now have a GIN index on a jsonb array directly:
CREATE TABLE tracks (id serial, artists jsonb);
CREATE INDEX tracks_artists_gin_idx ON tracks USING gin (artists);
No need for a function to convert the array. This would support a query:
SELECT * FROM tracks WHERE artists @> '[{"name": "The Dirty Heads"}]';
@> being the new jsonb "contains" operator, which can use the GIN index. (Not for type json, only jsonb!)
Or you use the more specialized, non-default GIN operator class jsonb_path_ops for the index:
CREATE INDEX tracks_artists_gin_idx ON tracks
USING gin (artists jsonb_path_ops);
Same query.
If artists only holds names as displayed in the example, it would be more efficient to store a less redundant JSON value to begin with: just the values as text primitives and the redundant key can be in the column name.
Note the difference between JSON objects and primitive types:
CREATE TABLE tracks (id serial, artistnames jsonb);
INSERT INTO tracks VALUES (2, '["The Dirty Heads", "Louis Richards"]');
CREATE INDEX tracks_artistnames_gin_idx ON tracks USING gin (artistnames);
Query:
SELECT * FROM tracks WHERE artistnames ? 'The Dirty Heads';
? does not work for object values, just keys and array elements.
Or (more efficient if names are repeated often):
CREATE INDEX tracks_artistnames_gin_idx ON tracks
USING gin (artistnames jsonb_path_ops);
Query:
SELECT * FROM tracks WHERE artistnames @> '"The Dirty Heads"'::jsonb;
jsonb_path_ops currently only supports indexing the @> operator.
There are more index options, details in the manual.
json in Postgres 9.3+
This should work with an IMMUTABLE function:
CREATE OR REPLACE FUNCTION json2arr(_j json, _key text)
RETURNS text[] LANGUAGE sql IMMUTABLE AS
'SELECT ARRAY(SELECT elem->>_key FROM json_array_elements(_j) elem)';
Create this functional index:
CREATE INDEX tracks_artists_gin_idx ON tracks
USING gin (json2arr(artists, 'name'));
And use a query like this. The expression in the WHERE clause has to match the one in the index:
SELECT * FROM tracks
WHERE '{"The Dirty Heads"}'::text[] <@ (json2arr(artists, 'name'));
Updated with feedback in comments. We need to use array operators to support the GIN index.
The "is contained by" operator <@ in this case.
Notes on function volatility
You can declare your function IMMUTABLE even though json_array_elements() itself wasn't.
Most JSON functions used to be only STABLE, not IMMUTABLE. There was a discussion on the hackers list to change that. Most are IMMUTABLE now. Check with:
SELECT p.proname, p.provolatile
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE n.nspname = 'pg_catalog'
AND p.proname ~~* '%json%';
Functional indexes only work with IMMUTABLE functions.
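For intuition about what the `@>` containment test above actually asks, here is a rough Python model of its semantics (not how Postgres implements it): does the JSON array contain an element carrying at least the given key/value pairs?

```python
import json

# Sample value from the answer; the containment target is the query's argument.
artists = json.loads('[{"name": "The Dirty Heads"}, {"name": "Louis Richards"}]')
target = {"name": "The Dirty Heads"}

# jsonb @> '[{"name": "..."}]' succeeds if some array element is a superset
# of the target object's key/value pairs.
contains = any(target.items() <= elem.items() for elem in artists)
print(contains)  # True
```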
qid & accept id:
(18410600, 18410959)
query:
Selecting the most recent, lowest price from multiple vendors for an inventory item
soup:
Much simpler with DISTINCT ON in Postgres:
\nCurrent price per item for each vendor
\nSELECT DISTINCT ON (p.item_id, p.vendor_id)\n i.title, p.price, p.vendor_id\nFROM prices p\nJOIN items i ON i.id = p.item_id\nORDER BY p.item_id, p.vendor_id, p.created_at DESC;\n
\nOptimal vendor for each item
\nSELECT DISTINCT ON (item_id) \n i.title, p.price, p.vendor_id -- add more columns as you need\nFROM (\n SELECT DISTINCT ON (item_id, vendor_id)\n item_id, price, vendor_id -- add more columns as you need\n FROM prices p\n ORDER BY item_id, vendor_id, created_at DESC\n ) p\nJOIN items i ON i.id = p.item_id\nORDER BY item_id, price;\n
\n\nDetailed explanation:
\nSelect first row in each GROUP BY group?
\n
soup wrap:
Much simpler with DISTINCT ON in Postgres:
Current price per item for each vendor
SELECT DISTINCT ON (p.item_id, p.vendor_id)
i.title, p.price, p.vendor_id
FROM prices p
JOIN items i ON i.id = p.item_id
ORDER BY p.item_id, p.vendor_id, p.created_at DESC;
Optimal vendor for each item
SELECT DISTINCT ON (item_id)
i.title, p.price, p.vendor_id -- add more columns as you need
FROM (
SELECT DISTINCT ON (item_id, vendor_id)
item_id, price, vendor_id -- add more columns as you need
FROM prices p
ORDER BY item_id, vendor_id, created_at DESC
) p
JOIN items i ON i.id = p.item_id
ORDER BY item_id, price;
Detailed explanation:
Select first row in each GROUP BY group?
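`DISTINCT ON` is Postgres-specific, but the same "first row per group" result falls out of `row_number()` on any engine with window functions. A SQLite sketch of the per-vendor current price (requires SQLite >= 3.25, as bundled with recent Python builds; sample rows invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE prices (item_id INTEGER, vendor_id INTEGER, price REAL, created_at TEXT);
INSERT INTO prices VALUES
  (1, 10, 5.0, '2013-08-01'),
  (1, 10, 4.0, '2013-08-02'),
  (1, 20, 3.5, '2013-08-01');
""")
# Keep the newest row per (item_id, vendor_id), i.e. the current price.
rows = con.execute("""
SELECT item_id, vendor_id, price FROM (
  SELECT *, row_number() OVER (PARTITION BY item_id, vendor_id
                               ORDER BY created_at DESC) AS rn
  FROM prices
) WHERE rn = 1
ORDER BY vendor_id
""").fetchall()
print(rows)  # one current price per vendor
```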
qid & accept id:
(18415438, 18415525)
query:
SQL Query Sum and total of rows
soup:
Try this query:
\nSELECT ITEM\n ,SUM(CASE WHEN LOCATION = 001 THEN QUANTITY ELSE 0 END) AS Location_001\n ,SUM(CASE WHEN LOCATION = 002 THEN QUANTITY ELSE 0 END) AS Location_002\n ,SUM(CASE WHEN LOCATION = 003 THEN QUANTITY ELSE 0 END) AS Location_003\n ,SUM(Quantity) AS Total\nFROM Table1\nGROUP BY ITEM;\n
\nIf you don't know the locations in advance, you can try this dynamic query:
\nSET @sql = NULL;\nSELECT\n GROUP_CONCAT(DISTINCT\n CONCAT(\n 'SUM(CASE WHEN `LOCATION` = ''',\n `LOCATION`,\n ''' THEN QUANTITY ELSE 0 END) AS `',\n `LOCATION`, '`'\n )\n ) INTO @sql\nFROM Table1;\n\nSET @sql = CONCAT('SELECT ITEM, ', @sql,'\n ,SUM(Quantity) AS Total \n FROM Table1\n GROUP BY ITEM\n ');\n\nPREPARE stmt FROM @sql;\nEXECUTE stmt;\nDEALLOCATE PREPARE stmt;\n
\nResult:
\n| ITEM | 1 | 2 | 3 | TOTAL |\n|----------|---|---|---|-------|\n| BLUE CAR | 0 | 2 | 5 | 7 |\n| RED CAR | 3 | 8 | 0 | 11 |\n
\nSee this SQLFiddle
\n
soup wrap:
Try this query:
SELECT ITEM
,SUM(CASE WHEN LOCATION = 001 THEN QUANTITY ELSE 0 END) AS Location_001
,SUM(CASE WHEN LOCATION = 002 THEN QUANTITY ELSE 0 END) AS Location_002
,SUM(CASE WHEN LOCATION = 003 THEN QUANTITY ELSE 0 END) AS Location_003
,SUM(Quantity) AS Total
FROM Table1
GROUP BY ITEM;
If you don't know the locations in advance, you can try this dynamic query:
SET @sql = NULL;
SELECT
GROUP_CONCAT(DISTINCT
CONCAT(
'SUM(CASE WHEN `LOCATION` = ''',
`LOCATION`,
''' THEN QUANTITY ELSE 0 END) AS `',
`LOCATION`, '`'
)
) INTO @sql
FROM Table1;
SET @sql = CONCAT('SELECT ITEM, ', @sql,'
,SUM(Quantity) AS Total
FROM Table1
GROUP BY ITEM
');
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
Result:
| ITEM | 1 | 2 | 3 | TOTAL |
|----------|---|---|---|-------|
| BLUE CAR | 0 | 2 | 5 | 7 |
| RED CAR | 3 | 8 | 0 | 11 |
See this SQLFiddle
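The first, static query is plain conditional aggregation and runs unchanged on SQLite, which makes it easy to reproduce the result table above (sample rows taken from that table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Table1 (ITEM TEXT, LOCATION INTEGER, QUANTITY INTEGER);
INSERT INTO Table1 VALUES
  ('RED CAR', 1, 3), ('RED CAR', 2, 8),
  ('BLUE CAR', 2, 2), ('BLUE CAR', 3, 5);
""")
# Pivot via SUM(CASE ...): one column per known location plus a row total.
rows = con.execute("""
SELECT ITEM,
       SUM(CASE WHEN LOCATION = 1 THEN QUANTITY ELSE 0 END) AS Location_001,
       SUM(CASE WHEN LOCATION = 2 THEN QUANTITY ELSE 0 END) AS Location_002,
       SUM(CASE WHEN LOCATION = 3 THEN QUANTITY ELSE 0 END) AS Location_003,
       SUM(QUANTITY) AS Total
FROM Table1
GROUP BY ITEM
ORDER BY ITEM
""").fetchall()
print(rows)  # matches the result table in the answer
```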
qid & accept id:
(18420123, 18423693)
query:
Count preceding rows that match criteria
soup:
This seems to do it:
\nlibrary(data.table)\nset.seed(50)\nDT <- data.table(NETSALES=ifelse(runif(40)<.15,0,runif(40,1,100)), cust=rep(1:2, each=20), dt=1:20)\nDT[,dir:=ifelse(NETSALES>0,1,0)]\ndir.rle <- rle(DT$dir)\nDT <- transform(DT, indexer = rep(1:length(dir.rle$lengths), dir.rle$lengths))\nDT[,runl:=cumsum(dir),by=indexer]\n
\ncredit to Cumulative sums over run lengths. Can this loop be vectorized?
\n
\nEdit by Roland:
\nHere is the same with better performance and also considering different customers:
\n#no need for ifelse\nDT[,dir:= NETSALES>0]\n\n#use a function to avoid storing the rle, which could be huge\nrunseq <- function(x) {\n x.rle <- rle(x)\n rep(1:length(x.rle$lengths), x.rle$lengths)\n}\n\n#never use transform with data.table\nDT[,indexer := runseq(dir)]\n\n#include cust in by\nDT[,runl:=cumsum(dir),by=list(indexer,cust)]\n
\n
\nEdit: joe added SQL solution\nhttp://sqlfiddle.com/#!6/990eb/22
\nThe SQL solution takes 48 minutes on a machine with 128 GB of RAM across 22M rows. The R solution takes about 20 seconds on a workstation with 4 GB of RAM. Go R!
\n
soup wrap:
This seems to do it:
library(data.table)
set.seed(50)
DT <- data.table(NETSALES=ifelse(runif(40)<.15,0,runif(40,1,100)), cust=rep(1:2, each=20), dt=1:20)
DT[,dir:=ifelse(NETSALES>0,1,0)]
dir.rle <- rle(DT$dir)
DT <- transform(DT, indexer = rep(1:length(dir.rle$lengths), dir.rle$lengths))
DT[,runl:=cumsum(dir),by=indexer]
credit to Cumulative sums over run lengths. Can this loop be vectorized?
Edit by Roland:
Here is the same with better performance and also considering different customers:
#no need for ifelse
DT[,dir:= NETSALES>0]
#use a function to avoid storing the rle, which could be huge
runseq <- function(x) {
x.rle <- rle(x)
rep(1:length(x.rle$lengths), x.rle$lengths)
}
#never use transform with data.table
DT[,indexer := runseq(dir)]
#include cust in by
DT[,runl:=cumsum(dir),by=list(indexer,cust)]
Edit: joe added SQL solution
http://sqlfiddle.com/#!6/990eb/22
The SQL solution takes 48 minutes on a machine with 128 GB of RAM across 22M rows. The R solution takes about 20 seconds on a workstation with 4 GB of RAM. Go R!
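The core idea of the R solution, a cumulative count restarted at every run boundary of `rle()`, translates directly to Python's `itertools.groupby`. A small sketch with invented sales figures:

```python
from itertools import groupby

# Within each run of positive sales, count how many positive days have
# occurred so far; zero-sales days reset the counter (the rle + cumsum idea).
netsales = [5, 3, 0, 7, 2, 9, 0, 0, 4]

runl = []
for positive, run in groupby(netsales, key=lambda x: x > 0):
    for i, _ in enumerate(run, start=1):
        runl.append(i if positive else 0)
print(runl)  # [1, 2, 0, 1, 2, 3, 0, 0, 1]
```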
qid & accept id:
(18477582, 18477634)
query:
One column, two names, mysql
soup:
I believe you are looking for a view. You can define the view as:
\ncreate view v_table as\n select t.*, `old` as `new`\n from `table` t;\n
\nAssuming no naming conflict, this will give you both.
\nNow, you might want to go a step further. You can rename the old table and have the view take the name of the old table:
\nrename table `table` to `old_table`;\ncreate view t as\n select t.*, `old` as `new`\n from `old_table` t;\n
\nThat way, everything that references table will start using the view with the new column name.
\n
soup wrap:
I believe you are looking for a view. You can define the view as:
create view v_table as
select t.*, `old` as `new`
from `table` t;
Assuming no naming conflict, this will give you both.
Now, you might want to go a step further. You can rename the old table and have the view take the name of the old table:
rename table `table` to `old_table`;
create view t as
select t.*, `old` as `new`
from `old_table` t;
That way, everything that references table will start using the view with the new column name.
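The view trick is portable; SQLite accepts the same backtick-quoted identifiers, so the "one column, two names" effect is easy to see locally (schema follows the answer; the row is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE `old_table` (id INTEGER, `old` TEXT);
INSERT INTO `old_table` VALUES (1, 'x');

-- Expose the same underlying column under both names through a view.
CREATE VIEW t AS
  SELECT `old_table`.*, `old` AS `new` FROM `old_table`;
""")
row = con.execute("SELECT id, `old`, `new` FROM t").fetchone()
print(row)  # both names return the same stored value
```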
qid & accept id:
(18486580, 18486818)
query:
Oracle - calculate number of rows before some condition is applied
soup:
You can use the analytic version of COUNT() in a nested query, e.g.:
\nSELECT * FROM\n(\n SELECT table_name,\n COUNT(*) OVER() AS numberofrows\n FROM all_tables\n WHERE owner = 'SYS'\n ORDER BY table_name\n)\nWHERE rownum < 10;\n
\nYou need to nest it anyway to apply an order-by before the rownum filter to get consistent results, otherwise you get a random(ish) set of rows.
\nYou can also replace rownum with the analytic ROW_NUMBER() function:
\nSELECT table_name, numberofrows FROM\n(\n SELECT table_name,\n COUNT(*) OVER () AS numberofrows,\n ROW_NUMBER() OVER (ORDER BY table_name) AS rn\n FROM all_tables\n WHERE owner = 'SYS'\n)\nWHERE rn < 10;\n
\n
soup wrap:
You can use the analytic version of COUNT() in a nested query, e.g.:
SELECT * FROM
(
SELECT table_name,
COUNT(*) OVER() AS numberofrows
FROM all_tables
WHERE owner = 'SYS'
ORDER BY table_name
)
WHERE rownum < 10;
You need to nest it anyway to apply an order-by before the rownum filter to get consistent results, otherwise you get a random(ish) set of rows.
You can also replace rownum with the analytic ROW_NUMBER() function:
SELECT table_name, numberofrows FROM
(
SELECT table_name,
COUNT(*) OVER () AS numberofrows,
ROW_NUMBER() OVER (ORDER BY table_name) AS rn
FROM all_tables
WHERE owner = 'SYS'
)
WHERE rn < 10;
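The `COUNT(*) OVER ()` + `ROW_NUMBER()` pattern is standard SQL and also runs on SQLite (3.25+), so the "top-N rows plus the pre-filter total" behaviour is easy to demonstrate with a stand-in table (the Oracle catalog is replaced by an invented table here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tabs (table_name TEXT)")
con.executemany("INSERT INTO tabs VALUES (?)", [("a",), ("b",), ("c",), ("d",)])

# numberofrows is computed over ALL rows, then the row-number filter keeps
# only the first two, so each kept row still reports the total of 4.
rows = con.execute("""
SELECT table_name, numberofrows FROM (
  SELECT table_name,
         COUNT(*) OVER () AS numberofrows,
         ROW_NUMBER() OVER (ORDER BY table_name) AS rn
  FROM tabs
) WHERE rn <= 2
ORDER BY table_name
""").fetchall()
print(rows)  # two rows returned, each carrying the pre-filter count
```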
qid & accept id:
(18499562, 18499651)
query:
Connecting to a SQL Server through another Sever connection that's not linked
soup:
You'd need either OPENROWSET\nor OPENDATASOURCE
\nFound examples here:
\nOPENROWSET:
\nSELECT *\nFROM OPENROWSET('SQLNCLI',\n 'DRIVER={SQL Server};SERVER=MyServer;UID=MyUserID;PWD=MyCleverPassword',\n 'select @@ServerName') \n
\nOPENDATASOURCE:
\nSELECT * \nFROM OPENDATASOURCE ('SQLNCLI',\n 'Data Source=OtherServer\InstanceName;Catalog=RemoteDB;User ID=SQLLogin;Password=Secret;').RemoteDB.dbo.SomeTable\n
\n
soup wrap:
You'd need either OPENROWSET or OPENDATASOURCE
Found examples here:
OPENROWSET:
SELECT *
FROM OPENROWSET('SQLNCLI',
'DRIVER={SQL Server};SERVER=MyServer;UID=MyUserID;PWD=MyCleverPassword',
'select @@ServerName')
OPENDATASOURCE:
SELECT *
FROM OPENDATASOURCE ('SQLNCLI',
'Data Source=OtherServer\InstanceName;Catalog=RemoteDB;User ID=SQLLogin;Password=Secret;').RemoteDB.dbo.SomeTable
qid & accept id:
(18513029, 18513282)
query:
MySQL order by points from 2nd table
soup:
You want to move your expression into the select clause:
\nSELECT i.*,\n (SELECT count(*) AS points \n FROM `amenities_index` ai\n WHERE amenity_id in (1, 2) AND\n ai.item_id = i.id\n ) as points\nFROM items i\nORDER BY points desc;\n
\nYou can also do this as a join query with aggregation:
\nSELECT i.*, ai.points\nFROM items i join\n (select ai.item_id, count(*) as points\n from amenities_index ai\n where amenity_id in (1, 2)\n ) ai\n on ai.item_id = i.id\nORDER BY ai.points desc;\n
\nIn most databases, I would prefer this version over the first one. However, MySQL would allow the first in a view but not the second, so it has some strange limitations under some circumstances.
\n
soup wrap:
You want to move your expression into the select clause:
SELECT i.*,
(SELECT count(*) AS points
FROM `amenities_index` ai
WHERE amenity_id in (1, 2) AND
ai.item_id = i.id
) as points
FROM items i
ORDER BY points desc;
You can also do this as a join query with aggregation:
SELECT i.*, ai.points
FROM items i join
(select ai.item_id, count(*) as points
from amenities_index ai
where amenity_id in (1, 2)
group by ai.item_id
) ai
on ai.item_id = i.id
ORDER BY ai.points desc;
In most databases, I would prefer this version over the first one. However, MySQL would allow the first in a view but not the second, so it has some strange limitations under some circumstances.
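Both shapes can be checked against a small in-memory dataset. Below is a sketch using Python's sqlite3 with made-up rows (the table and column names follow the answer, the data does not come from the question); note the join variant also silently drops items with zero points, which is a real behavioral difference:

```python
import sqlite3

# In-memory tables with made-up rows mirroring the answer's schema.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE amenities_index (item_id INTEGER, amenity_id INTEGER);
INSERT INTO items VALUES (1, 'flat'), (2, 'house'), (3, 'cabin');
INSERT INTO amenities_index VALUES (1, 1), (2, 1), (2, 2), (3, 3);
""")

# Variant 1: correlated subquery in the SELECT clause.
subquery = """
SELECT i.name,
       (SELECT count(*) FROM amenities_index ai
        WHERE ai.amenity_id IN (1, 2) AND ai.item_id = i.id) AS points
FROM items i
ORDER BY points DESC;
"""

# Variant 2: join against a pre-aggregated derived table.
join = """
SELECT i.name, ai.points
FROM items i
JOIN (SELECT item_id, count(*) AS points
      FROM amenities_index
      WHERE amenity_id IN (1, 2)
      GROUP BY item_id) ai ON ai.item_id = i.id
ORDER BY ai.points DESC;
"""

print(con.execute(subquery).fetchall())  # house 2, flat 1, cabin 0
print(con.execute(join).fetchall())      # cabin drops out: inner join, no match
```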
qid & accept id:
(18534648, 18534798)
query:
Custom ordering using Analytical Functions
soup:
I assume that you want to assign row_number() based on the ordering, because the analytic functions do not "order" tables. Did you try this?
\nSELECT empno, ename, deptno,\n row_number() over (ORDER BY DECODE(deptno, NULL, 0, 2, 1, 3)) as seqnum\nFROM emp;\n
\nYou could also do this without analytic functions at all:
\nselect e.*, rownum as seqnum\nfrom (SELECT empno, ename, deptno\n FROM emp\n ORDER BY DECODE (deptno, NULL, 0, 2, 1, 3)\n ) e\n
\n
soup wrap:
I assume that you want to assign row_number() based on the ordering, because the analytic functions do not "order" tables. Did you try this?
SELECT empno, ename, deptno,
row_number() over (ORDER BY DECODE(deptno, NULL, 0, 2, 1, 3)) as seqnum
FROM emp;
You could also do this without analytic functions at all:
select e.*, rownum as seqnum
from (SELECT empno, ename, deptno
FROM emp
ORDER BY DECODE (deptno, NULL, 0, 2, 1, 3)
) e
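The same idea can be tried outside Oracle. DECODE is Oracle-specific, so the sketch below rewrites it as a portable CASE expression and runs it via Python's sqlite3 (needs an SQLite build with window functions, 3.25+; the emp rows are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE emp (empno INTEGER, ename TEXT, deptno INTEGER);
INSERT INTO emp VALUES (1, 'ALLEN', 3), (2, 'BLAKE', 2), (3, 'CLARK', NULL);
""")

# DECODE(deptno, NULL, 0, 2, 1, 3) as a CASE expression:
# NULL sorts first, deptno 2 second, everything else last.
rows = con.execute("""
SELECT empno, ename, deptno,
       row_number() OVER (
           ORDER BY CASE WHEN deptno IS NULL THEN 0
                         WHEN deptno = 2 THEN 1
                         ELSE 3 END
       ) AS seqnum
FROM emp;
""").fetchall()
print(rows)  # CLARK (NULL) gets seqnum 1, BLAKE (2) gets 2, ALLEN gets 3
```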
qid & accept id:
(18547311, 18547696)
query:
Complex rolling scenario (CROSS APPLY and OUTER APPLY example)
soup:
I assume that you have a DimDate table with the following structure:
\nCREATE TABLE DimDate\n(\nDateKey INT PRIMARY KEY\n);\n
\nand the DateKey column doesn't have gaps.
\nSolution:
\nDECLARE @NumDays INT = 3;\n\nWITH basic_cte AS\n (\n SELECT x.DateKey,\n d.Name,\n Amount = ISNULL(f.Amount,0)\n FROM \n (\n SELECT t.*, CONVERT(INT,CONVERT(CHAR(8),CONVERT(DATETIME,CONVERT(DATETIME,CONVERT(CHAR(8),t.LiveKey,112))+@NumDays),112)) AS EndLiveKey\n FROM #target t\n ) d \n CROSS APPLY\n (\n SELECT dm.DateKey\n FROM DimDate dm\n WHERE dm.DateKey >= d.LiveKey \n AND dm.DateKey < d.EndLiveKey \n ) x\n LEFT OUTER JOIN #Fact f \n ON f.PlayerKey = d.PlayerKey \n AND f.DateKey = x.DateKey\n )\nSELECT rn = ROW_NUMBER() OVER(PARTITION BY Name ORDER BY DateKey),\n y.*,\n "RollingAmount" = SUM(Amount) OVER(PARTITION BY Name ORDER BY DateKey)\nFROM basic_cte y;\n
\nEdit #1:
\nDECLARE @NumDays INT = 3;\n\nWITH basic_cte AS\n (\n SELECT rn = ROW_NUMBER() OVER(PARTITION BY Name ORDER BY x.DateKey),\n x.DateKey,\n d.Name,\n Amount = ISNULL(f.Amount,0),\n AmountAll = ISNULL(fall.AmountAll,0)\n FROM \n (\n SELECT t.*, CONVERT(INT,CONVERT(CHAR(8),CONVERT(DATETIME,CONVERT(DATETIME,CONVERT(CHAR(8),t.LiveKey,112))+@NumDays),112)) AS EndLiveKey\n FROM #target t\n ) d \n CROSS APPLY\n (\n SELECT dm.DateKey\n FROM DimDate dm\n WHERE dm.DateKey >= d.LiveKey \n AND dm.DateKey < d.EndLiveKey \n ) x\n OUTER APPLY\n (\n SELECT SUM(fct.Amount) AS Amount\n FROM #Fact fct \n WHERE fct.DateKey = x.DateKey\n AND fct.PlayerKey = d.PlayerKey\n ) f\n OUTER APPLY\n (\n SELECT SUM(fct.Amount) AS AmountAll \n FROM #Fact fct \n WHERE fct.DateKey = x.DateKey\n ) fall\n )\nSELECT \n y.*,\n "RollingAmount" = SUM(Amount) OVER(PARTITION BY Name ORDER BY DateKey),\n "RollingAmountAll" = SUM(AmountAll) OVER(PARTITION BY Name ORDER BY DateKey)\nFROM basic_cte y;\n
\nEdit #2:
\nDECLARE @NumDays INT = 3;\n\nWITH basic_cte AS\n (\n SELECT rn = ROW_NUMBER() OVER(PARTITION BY Name ORDER BY x.DateKey),\n x.DateKey,\n d.Name,\n Amount = ISNULL(f.Amount,0),\n AmountAll = ISNULL(f.AmountAll,0)\n FROM \n (\n SELECT t.*, EndLiveKey = CONVERT(INT,CONVERT(CHAR(8),CONVERT(DATETIME,CONVERT(DATETIME,CONVERT(CHAR(8),t.LiveKey,112))+@NumDays),112))\n FROM #target t\n ) d \n CROSS APPLY\n (\n SELECT dm.DateKey\n FROM DimDate dm\n WHERE dm.DateKey >= d.LiveKey \n AND dm.DateKey < d.EndLiveKey \n ) x\n OUTER APPLY\n (\n SELECT AmountAll = SUM(fbase.Amount),\n Amount = SUM(CASE WHEN PlayerKey1 = PlayerKey2 THEN fbase.Amount END)\n FROM\n (\n SELECT fct.Amount, fct.PlayerKey AS PlayerKey1, d.PlayerKey AS PlayerKey2\n FROM #Fact fct \n WHERE fct.DateKey = x.DateKey\n ) fbase\n ) f\n )\nSELECT \n y.*,\n "RollingAmount" = SUM(Amount) OVER(PARTITION BY Name ORDER BY DateKey),\n "RollingAmountAll" = SUM(AmountAll) OVER(PARTITION BY Name ORDER BY DateKey)\nFROM basic_cte y;\n
\n
soup wrap:
I assume that you have a DimDate table with the following structure:
CREATE TABLE DimDate
(
DateKey INT PRIMARY KEY
);
and the DateKey column doesn't have gaps.
Solution:
DECLARE @NumDays INT = 3;
WITH basic_cte AS
(
SELECT x.DateKey,
d.Name,
Amount = ISNULL(f.Amount,0)
FROM
(
SELECT t.*, CONVERT(INT,CONVERT(CHAR(8),CONVERT(DATETIME,CONVERT(DATETIME,CONVERT(CHAR(8),t.LiveKey,112))+@NumDays),112)) AS EndLiveKey
FROM #target t
) d
CROSS APPLY
(
SELECT dm.DateKey
FROM DimDate dm
WHERE dm.DateKey >= d.LiveKey
AND dm.DateKey < d.EndLiveKey
) x
LEFT OUTER JOIN #Fact f
ON f.PlayerKey = d.PlayerKey
AND f.DateKey = x.DateKey
)
SELECT rn = ROW_NUMBER() OVER(PARTITION BY Name ORDER BY DateKey),
y.*,
"RollingAmount" = SUM(Amount) OVER(PARTITION BY Name ORDER BY DateKey)
FROM basic_cte y;
Edit #1:
DECLARE @NumDays INT = 3;
WITH basic_cte AS
(
SELECT rn = ROW_NUMBER() OVER(PARTITION BY Name ORDER BY x.DateKey),
x.DateKey,
d.Name,
Amount = ISNULL(f.Amount,0),
AmountAll = ISNULL(fall.AmountAll,0)
FROM
(
SELECT t.*, CONVERT(INT,CONVERT(CHAR(8),CONVERT(DATETIME,CONVERT(DATETIME,CONVERT(CHAR(8),t.LiveKey,112))+@NumDays),112)) AS EndLiveKey
FROM #target t
) d
CROSS APPLY
(
SELECT dm.DateKey
FROM DimDate dm
WHERE dm.DateKey >= d.LiveKey
AND dm.DateKey < d.EndLiveKey
) x
OUTER APPLY
(
SELECT SUM(fct.Amount) AS Amount
FROM #Fact fct
WHERE fct.DateKey = x.DateKey
AND fct.PlayerKey = d.PlayerKey
) f
OUTER APPLY
(
SELECT SUM(fct.Amount) AS AmountAll
FROM #Fact fct
WHERE fct.DateKey = x.DateKey
) fall
)
SELECT
y.*,
"RollingAmount" = SUM(Amount) OVER(PARTITION BY Name ORDER BY DateKey),
"RollingAmountAll" = SUM(AmountAll) OVER(PARTITION BY Name ORDER BY DateKey)
FROM basic_cte y;
Edit #2:
DECLARE @NumDays INT = 3;
WITH basic_cte AS
(
SELECT rn = ROW_NUMBER() OVER(PARTITION BY Name ORDER BY x.DateKey),
x.DateKey,
d.Name,
Amount = ISNULL(f.Amount,0),
AmountAll = ISNULL(f.AmountAll,0)
FROM
(
SELECT t.*, EndLiveKey = CONVERT(INT,CONVERT(CHAR(8),CONVERT(DATETIME,CONVERT(DATETIME,CONVERT(CHAR(8),t.LiveKey,112))+@NumDays),112))
FROM #target t
) d
CROSS APPLY
(
SELECT dm.DateKey
FROM DimDate dm
WHERE dm.DateKey >= d.LiveKey
AND dm.DateKey < d.EndLiveKey
) x
OUTER APPLY
(
SELECT AmountAll = SUM(fbase.Amount),
Amount = SUM(CASE WHEN PlayerKey1 = PlayerKey2 THEN fbase.Amount END)
FROM
(
SELECT fct.Amount, fct.PlayerKey AS PlayerKey1, d.PlayerKey AS PlayerKey2
FROM #Fact fct
WHERE fct.DateKey = x.DateKey
) fbase
) f
)
SELECT
y.*,
"RollingAmount" = SUM(Amount) OVER(PARTITION BY Name ORDER BY DateKey),
"RollingAmountAll" = SUM(AmountAll) OVER(PARTITION BY Name ORDER BY DateKey)
FROM basic_cte y;
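All three variants end in the same windowed SUM, so that final step can be sanity-checked in isolation. A minimal sketch with a made-up fact table (not the question's schema), using Python's sqlite3 (SQLite 3.25+ for window functions):

```python
import sqlite3

# Only the rolling SUM(...) OVER step; Name/DateKey/Amount values are made up.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE fact (Name TEXT, DateKey INTEGER, Amount INTEGER);
INSERT INTO fact VALUES ('p1', 20130101, 5), ('p1', 20130102, 0),
                        ('p1', 20130103, 7);
""")

rows = con.execute("""
SELECT Name, DateKey, Amount,
       SUM(Amount) OVER (PARTITION BY Name ORDER BY DateKey) AS RollingAmount
FROM fact
ORDER BY DateKey;
""").fetchall()
print(rows)  # running totals 5, 5, 12
```

The default window frame (unbounded preceding to current row) is what makes the SUM cumulative, the same way it works in the T-SQL above.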
qid & accept id:
(18570414, 18570443)
query:
how to pass parameter to procedure and call in where clause
soup:
You need to use glb_date = @d_date
\nFirst you'll need to alter how the parameter is defined in the CREATE PROCEDURE definition, for example:
\nCREATE PROCEDURE prac\n(\n @d_date DATE\n)\n
\nNotice the @
\nThen change your WHERE clause to use the variable:
\n where glb_date= @d_date;\n
\n
soup wrap:
You need to use glb_date = @d_date
First you'll need to alter how the parameter is defined in the CREATE PROCEDURE definition, for example:
CREATE PROCEDURE prac
(
@d_date DATE
)
Notice the @
Then change your WHERE clause to use the variable:
where glb_date= @d_date;
qid & accept id:
(18575984, 18576134)
query:
Pivot a fixed multiple column table in sql server
soup:
This one will do what you want, but you have to specify all the dates
\nselect\n c.Name,\n max(case when t.DateCreated = '2013-08-26' then c.Value end) as [2013-08-26],\n max(case when t.DateCreated = '2013-08-27' then c.Value end) as [2013-08-27],\n max(case when t.DateCreated = '2013-08-28' then c.Value end) as [2013-08-28],\n max(case when t.DateCreated = '2013-08-29' then c.Value end) as [2013-08-29],\n max(case when t.DateCreated = '2013-08-30' then c.Value end) as [2013-08-30],\n max(case when t.DateCreated = '2013-08-31' then c.Value end) as [2013-08-31],\n max(case when t.DateCreated = '2013-09-01' then c.Value end) as [2013-09-01]\nfrom test as t\n outer apply (\n select 'Rands', Rands union all\n select 'Units', Units union all\n select 'Average Price', [Average Price] union all\n select 'Success %', [Success %] union all\n select 'Unique Users', [Unique Users]\n ) as C(Name, Value)\ngroup by c.Name\n
\nYou can create a dynamic SQL for this, something like this:
\ndeclare @stmt nvarchar(max)\n\nselect @stmt = isnull(@stmt + ',', '') + \n 'max(case when t.DateCreated = ''' + convert(nvarchar(8), t.DateCreated, 112) + ''' then c.Value end) as [' + convert(nvarchar(8), t.DateCreated, 112) + ']'\nfrom test as t\n\nselect @stmt = '\n select\n c.Name, ' + @stmt + ' from test as t\n outer apply (\n select ''Rands'', Rands union all\n select ''Units'', Units union all\n select ''Average Price'', [Average Price] union all\n select ''Success %'', [Success %] union all\n select ''Unique Users'', [Unique Users]\n ) as C(Name, Value)\n group by c.Name'\n\nexec sp_executesql @stmt = @stmt\n
\n
soup wrap:
This one will do what you want, but you have to specify all the dates explicitly:
select
c.Name,
max(case when t.DateCreated = '2013-08-26' then c.Value end) as [2013-08-26],
max(case when t.DateCreated = '2013-08-27' then c.Value end) as [2013-08-27],
max(case when t.DateCreated = '2013-08-28' then c.Value end) as [2013-08-28],
max(case when t.DateCreated = '2013-08-29' then c.Value end) as [2013-08-29],
max(case when t.DateCreated = '2013-08-30' then c.Value end) as [2013-08-30],
max(case when t.DateCreated = '2013-08-31' then c.Value end) as [2013-08-31],
max(case when t.DateCreated = '2013-09-01' then c.Value end) as [2013-09-01]
from test as t
outer apply (
select 'Rands', Rands union all
select 'Units', Units union all
select 'Average Price', [Average Price] union all
select 'Success %', [Success %] union all
select 'Unique Users', [Unique Users]
) as C(Name, Value)
group by c.Name
You can build this SQL dynamically, something like this:
declare @stmt nvarchar(max)
select @stmt = isnull(@stmt + ',', '') +
'max(case when t.DateCreated = ''' + convert(nvarchar(8), t.DateCreated, 112) + ''' then c.Value end) as [' + convert(nvarchar(8), t.DateCreated, 112) + ']'
from test as t
select @stmt = '
select
c.Name, ' + @stmt + ' from test as t
outer apply (
select ''Rands'', Rands union all
select ''Units'', Units union all
select ''Average Price'', [Average Price] union all
select ''Success %'', [Success %] union all
select ''Unique Users'', [Unique Users]
) as C(Name, Value)
group by c.Name'
exec sp_executesql @stmt = @stmt
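The string-building trick ports to any host language. The sketch below mimics it in Python with sqlite3: collect the distinct dates, generate one max(CASE ...) column per date, then run the assembled statement (made-up rows; SQLite has no OUTER APPLY, so a CROSS JOIN plus CASE stands in for the unpivot):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE test (DateCreated TEXT, Rands REAL, Units INTEGER);
INSERT INTO test VALUES ('2013-08-26', 10, 2), ('2013-08-27', 30, 5);
""")

# Step 1: one generated column per distinct date, like the @stmt loop.
dates = [r[0] for r in con.execute(
    "SELECT DISTINCT DateCreated FROM test ORDER BY DateCreated")]
cols = ", ".join(
    "max(CASE WHEN c.DateCreated = '%s' THEN c.Value END) AS [%s]" % (d, d)
    for d in dates)

# Step 2: unpivot the measure columns, then pivot back out by date.
rows = con.execute("""
SELECT c.Name, %s
FROM (
    SELECT t.DateCreated, n.Name,
           CASE n.Name WHEN 'Rands' THEN t.Rands ELSE t.Units END AS Value
    FROM test t
    CROSS JOIN (SELECT 'Rands' AS Name UNION ALL SELECT 'Units') n
) c
GROUP BY c.Name
ORDER BY c.Name
""" % cols).fetchall()
print(rows)
```

Since the generated SQL embeds values read from the table, in real code the interpolated strings should be validated or quoted, exactly as with the T-SQL version.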
qid & accept id:
(18613117, 18614557)
query:
Sorting data from two different sorted cursors data of different tables into One
soup:
You could combine both queries into a single query.
\nFirst, ensure that both results have the same number of columns.\nIf not, you might need to add some dummy column(s) to one query.
\nThen combine the two with UNION ALL:
\nSELECT alpha, beeta, gamma, Remark, id, number FROM X\nUNION ALL\nSELECT Type, Date, gamma, Obs, NULL, number FROM Y\n
\nThen pick one column of the entire result that you want to order by.\n(The column names of the result come from the first query.)\nIn this case, the Start column is not part of the result, so we have to add it (and the Date column is duplicated in the second query, but this is necessary for its values to end up in the result column that is used for sorting):
\nSELECT alpha, beeta, gamma, Remark, id, number, Start AS SortThis FROM X\nUNION ALL\nSELECT Type, Date, gamma, Obs, NULL, number, Date FROM Y\nORDER BY SortThis\n
\n
soup wrap:
You could combine both queries into a single query.
First, ensure that both results have the same number of columns.
If not, you might need to add some dummy column(s) to one query.
Then combine the two with UNION ALL:
SELECT alpha, beeta, gamma, Remark, id, number FROM X
UNION ALL
SELECT Type, Date, gamma, Obs, NULL, number FROM Y
Then pick one column of the entire result that you want to order by.
(The column names of the result come from the first query.)
In this case, the Start column is not part of the result, so we have to add it (and the Date column is duplicated in the second query, but this is necessary for its values to end up in the result column that is used for sorting):
SELECT alpha, beeta, gamma, Remark, id, number, Start AS SortThis FROM X
UNION ALL
SELECT Type, Date, gamma, Obs, NULL, number, Date FROM Y
ORDER BY SortThis
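A trimmed two-column version of the pattern can be run end-to-end with Python's sqlite3 (table names X and Y follow the answer; the columns and rows are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE X (alpha TEXT, Start INTEGER);
CREATE TABLE Y (Type TEXT, Date INTEGER);
INSERT INTO X VALUES ('x1', 3), ('x2', 1);
INSERT INTO Y VALUES ('y1', 2), ('y2', 4);
""")

# Pad both branches to the same column list and carry the sort key along;
# the result's column names come from the first branch.
rows = con.execute("""
SELECT alpha, Start AS SortThis FROM X
UNION ALL
SELECT Type, Date FROM Y
ORDER BY SortThis;
""").fetchall()
print(rows)  # rows from both tables interleaved by the shared sort column
```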
qid & accept id:
(18619973, 18620578)
query:
Date a year from now and check what is the next Term from that Date
soup:
I think you are over complicating the problem, but as you requested, try this:
\nDECLARE @terms TABLE(term varchar(50),termStartDate date, termEndDate date)\nINSERT INTO @terms VALUES('Fall 2012','8/27/2012','12/15/2012')\nINSERT INTO @terms VALUES('Spring 2013','1/14/2013','4/26/2013')\nINSERT INTO @terms VALUES('Sumr I 2013','5/6/2013','6/29/2013')\nINSERT INTO @terms VALUES('Sumr II 2013','7/1/2013','8/24/2013')\nINSERT INTO @terms VALUES('Fall 2013','8/26/2013','12/14/2013')\nINSERT INTO @terms VALUES('Spring 2014','1/13/2014','4/26/2014')\n\nDECLARE @today date =GETDATE()\nSELECT @today = termEndDate \n FROM @terms \n WHERE termStartDate<=@today AND termEndDate>=@today\nSELECT term \n FROM @terms \n WHERE termStartDate>=DATEADD(d,-360,@today) AND termStartDate<=GETDATE()\n
\nThis will list all terms included in the period 360 days prior to the end of the current term.
\nUPDATE
\nSELECT min(termStartDate)startDate FROM (\n SELECT termStartDate \n FROM @terms \n GROUP BY termStartDate \n HAVING termStartDate>=DATEADD(d,-360,@today) \n AND termStartDate<=GETDATE()\n)z\n
\nwill get the startDate for the earliest term.
\n
soup wrap:
I think you are overcomplicating the problem, but as you requested, try this:
DECLARE @terms TABLE(term varchar(50),termStartDate date, termEndDate date)
INSERT INTO @terms VALUES('Fall 2012','8/27/2012','12/15/2012')
INSERT INTO @terms VALUES('Spring 2013','1/14/2013','4/26/2013')
INSERT INTO @terms VALUES('Sumr I 2013','5/6/2013','6/29/2013')
INSERT INTO @terms VALUES('Sumr II 2013','7/1/2013','8/24/2013')
INSERT INTO @terms VALUES('Fall 2013','8/26/2013','12/14/2013')
INSERT INTO @terms VALUES('Spring 2014','1/13/2014','4/26/2014')
DECLARE @today date =GETDATE()
SELECT @today = termEndDate
FROM @terms
WHERE termStartDate<=@today AND termEndDate>=@today
SELECT term
FROM @terms
WHERE termStartDate>=DATEADD(d,-360,@today) AND termStartDate<=GETDATE()
This will list all terms included in the period 360 days prior to the end of the current term.
UPDATE
SELECT min(termStartDate)startDate FROM (
SELECT termStartDate
FROM @terms
GROUP BY termStartDate
HAVING termStartDate>=DATEADD(d,-360,@today)
AND termStartDate<=GETDATE()
)z
will get the startDate for the earliest term.
qid & accept id:
(18629310, 18629411)
query:
Split string with proper format
soup:
Reverse the sting and search for the index of the first \. Then get the right of your column using this index.
\nSELECT RIGHT(Filename,PATINDEX('%\%',REVERSE(Filename))-1)\n
\nIf you want to turn File_1.70837292036d41139fcf8fa6b4997d3c.pdf to File_1.pdf then you could try the following, though it might look uggly:
\nSELECT \nLEFT\n(\n RIGHT\n (\n Filepath,\n CASE WHEN PATINDEX('%\%',REVERSE(Filepath)) > 0 \n THEN PATINDEX('%\%',REVERSE(Filepath))-1 \n ELSE LEN(Filepath) \n END \n ),\n CASE WHEN \n PATINDEX\n (\n '%.%',\n RIGHT\n (\n Filepath,\n CASE WHEN PATINDEX('%\%',REVERSE(Filepath)) > 0 \n THEN PATINDEX('%\%',REVERSE(Filepath))-1 \n ELSE LEN(Filepath) \n END\n )\n )>0\n THEN\n PATINDEX\n (\n '%.%',\n RIGHT\n (\n Filepath,\n CASE WHEN PATINDEX('%\%',REVERSE(Filepath)) > 0 \n THEN PATINDEX('%\%',REVERSE(Filepath))-1 \n ELSE LEN(Filepath) \n END\n )\n )-1\n ELSE 0 END\n)\n+\nRIGHT\n(\n Filepath,\n CASE WHEN PATINDEX('%.%',REVERSE(Filepath)) > 0 \n THEN PATINDEX('%.%',REVERSE(Filepath)) \n ELSE LEN(Filepath) \n END\n)\n
\n
soup wrap:
Reverse the string and search for the index of the first \. Then take the right part of your column using this index.
SELECT RIGHT(Filename,PATINDEX('%\%',REVERSE(Filename))-1)
If you want to turn File_1.70837292036d41139fcf8fa6b4997d3c.pdf into File_1.pdf, then you could try the following, though it might look ugly:
SELECT
LEFT
(
RIGHT
(
Filepath,
CASE WHEN PATINDEX('%\%',REVERSE(Filepath)) > 0
THEN PATINDEX('%\%',REVERSE(Filepath))-1
ELSE LEN(Filepath)
END
),
CASE WHEN
PATINDEX
(
'%.%',
RIGHT
(
Filepath,
CASE WHEN PATINDEX('%\%',REVERSE(Filepath)) > 0
THEN PATINDEX('%\%',REVERSE(Filepath))-1
ELSE LEN(Filepath)
END
)
)>0
THEN
PATINDEX
(
'%.%',
RIGHT
(
Filepath,
CASE WHEN PATINDEX('%\%',REVERSE(Filepath)) > 0
THEN PATINDEX('%\%',REVERSE(Filepath))-1
ELSE LEN(Filepath)
END
)
)-1
ELSE 0 END
)
+
RIGHT
(
Filepath,
CASE WHEN PATINDEX('%.%',REVERSE(Filepath)) > 0
THEN PATINDEX('%.%',REVERSE(Filepath))
ELSE LEN(Filepath)
END
)
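The intent of those two T-SQL expressions is easier to see in plain string code. A sketch of the same logic in Python (the helper names and the sample path are made up for illustration):

```python
# Equivalent of RIGHT(..., PATINDEX('%\\%', REVERSE(...)) - 1):
# everything after the last backslash.
def file_part(path):
    return path.rsplit('\\', 1)[-1]

# Equivalent of the big LEFT(...) + RIGHT(...) expression:
# keep the first dot-separated token and the extension, drop the middle.
def collapse(name):
    parts = name.split('.')
    return parts[0] + '.' + parts[-1] if len(parts) > 2 else name

p = 'C:\\files\\File_1.70837292036d41139fcf8fa6b4997d3c.pdf'
print(file_part(p))            # File_1.70837292036d41139fcf8fa6b4997d3c.pdf
print(collapse(file_part(p)))  # File_1.pdf
```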
qid & accept id:
(18644056, 18644112)
query:
multiple count conditions with single query
soup:
If you want to get number of students who got A in History in one column, number of students who got B in Maths in second column and number of students who got E in Geography in third then:
\nselect\n sum(case when [History] = 'A' then 1 else 0 end) as HistoryA,\n sum(case when [Maths] = 'B' then 1 else 0 end) as MathsB,\n sum(case when [Geography] = 'E' then 1 else 0 end) as GeographyE\nfrom Table1\n
\nIf you want to count students who got A in history, B in maths and E in Geography:
\nselect count(*)\nfrom Table1\nwhere [History] = 'A' and [Maths] = 'B' and [Geography] = 'E'\n
\n
soup wrap:
If you want the number of students who got A in History in one column, the number who got B in Maths in a second, and the number who got E in Geography in a third, then:
select
sum(case when [History] = 'A' then 1 else 0 end) as HistoryA,
sum(case when [Maths] = 'B' then 1 else 0 end) as MathsB,
sum(case when [Geography] = 'E' then 1 else 0 end) as GeographyE
from Table1
If you want to count students who got A in history, B in maths and E in Geography:
select count(*)
from Table1
where [History] = 'A' and [Maths] = 'B' and [Geography] = 'E'
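Both queries are standard SQL and can be checked with Python's sqlite3 on a few made-up rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Table1 (History TEXT, Maths TEXT, Geography TEXT);
INSERT INTO Table1 VALUES ('A', 'B', 'E'), ('A', 'C', 'E'), ('B', 'B', 'A');
""")

# One table scan, one conditional counter per column.
per_column = con.execute("""
SELECT sum(CASE WHEN History = 'A' THEN 1 ELSE 0 END),
       sum(CASE WHEN Maths = 'B' THEN 1 ELSE 0 END),
       sum(CASE WHEN Geography = 'E' THEN 1 ELSE 0 END)
FROM Table1;
""").fetchone()

# Students who satisfy all three conditions at once.
all_three = con.execute("""
SELECT count(*) FROM Table1
WHERE History = 'A' AND Maths = 'B' AND Geography = 'E';
""").fetchone()[0]

print(per_column, all_three)
```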
qid & accept id:
(18651768, 18652023)
query:
How to select data from another sql server server tables in sql script?
soup:
You can indeed use the
\nOPENDATASOURCE\n
\nor
\nOPENROWSET\n
\nNote that you have to turn on the ad hoc distributed queries option:
\nsp_configure 'show advanced options', 1;\nRECONFIGURE;\nsp_configure 'Ad Hoc Distributed Queries', 1;\nRECONFIGURE;\nGO\n
\n
soup wrap:
You can indeed use the
OPENDATASOURCE
or
OPENROWSET
Note that you have to turn on the ad hoc distributed queries option:
sp_configure 'show advanced options', 1;
RECONFIGURE;
sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;
GO
qid & accept id:
(18669731, 18669821)
query:
Keyword search using query
soup:
First, the answer is no, but if you'll change it to:
\nSELECT * FROM keywords WHERE column_name LIKE '%?%'\n
\nit should work.
\nSecond, it's not clear from your question how is the table constructed. If it's something like:
\n -----------------------------------------------------\n|column1 |column2 |column3 |column4 |column5 |column6 |\n -----------------------------------------------------\n|blablaa1|blablaa2|blablaa3|blablaa4|blabla?5|blablaa6|\n -----------------------------------------------------\n...\n
\nthen the answer I wrote in before won't work and the design is not good and should be replaced with one keyword per row. Another approach would be to query the table as follows:
\nSELECT * FROM keywords WHERE column1 LIKE '%?%' OR \ncolumn2 LIKE '%?%' OR \ncolumn3 LIKE '%?%' OR \n...\n
\nbut, as I just mentioned, this is NOT a good way to construct your table and you'd better think how to re-design it for better performance & maintenance.
\n
soup wrap:
First, the answer is no, but if you change it to:
SELECT * FROM keywords WHERE column_name LIKE '%?%'
it should work.
Second, it's not clear from your question how the table is constructed. If it's something like:
-----------------------------------------------------
|column1 |column2 |column3 |column4 |column5 |column6 |
-----------------------------------------------------
|blablaa1|blablaa2|blablaa3|blablaa4|blabla?5|blablaa6|
-----------------------------------------------------
...
then the answer I wrote above won't work; that design is not good and should be replaced with one keyword per row. Another approach would be to query the table as follows:
SELECT * FROM keywords WHERE column1 LIKE '%?%' OR
column2 LIKE '%?%' OR
column3 LIKE '%?%' OR
...
but, as I just mentioned, this is NOT a good way to construct your table, and you'd do better to think about how to redesign it for performance and maintainability.
qid & accept id:
(18708680, 19060795)
query:
Efficiently joining/merging based on matching part of a string
soup:
This is a partial answer that makes it run 4-5X faster, but it isn't ideal (it helps in my case, but wouldn't necessarily work in the general case of optimizing a Cartesian product join).
\nI originally had 4 separate index() statements like in my example (my simplified sample had 2 for A.first and A.last).
\nI was able to refactor all 4 of those index() statements (plus a 5th I was going to add) into a regular expression that solves the same problem. It won't return an identical result set, but I think it actually returns better results than the 5 separate indexes since you can specify word edges.
\nIn the datastep where I clean the names for matching, I create the following pattern:
\npattern = cats('/\b(',substr(upcase(first_name),1,1),'|',upcase(first_name),').?\s?',upcase(last_name),'\b/');\n
\nThis should create a regex along the lines of /\b(F|FIRST).?\s?LAST\b/ which will match anything like F. Last, First Last, flast@email.com, etc (there are combinations that it doesn't pick up, but I was only concerned with combinations that I observe in my data). Using '\b' also doesn't allow things where FLAST happens to be the same as the start/end of a word (such as "Edward Lo" getting matched to "Eloquent") which I find hard to avoid with index()
\nThen I do my sql join like this:
\nproc sql noprint;\ncreate table matched as\n select B.*, \n prxparse(B.pattern) as prxm, \n A.* \n from search_text as A,\n search_names as B\n where prxmatch(calculated prxm,A.notes)\n order by A.id;\nquit;\nrun;\n
\nBeing able to compile the regex once per name in B, and then run it on each piece of text in A seems to be dramatically faster than a couple of index statements (not sure about the case of a regex vs a single index).
\nRunning it with A=250,000 Obs and B=4,000 Obs, took something like 90 minutes of CPU time for the index() method, while doing the same with prxmatch() took only 20 minutes of CPU time.
\n
soup wrap:
This is a partial answer that makes it run 4-5X faster, but it isn't ideal (it helps in my case, but wouldn't necessarily work in the general case of optimizing a Cartesian product join).
I originally had 4 separate index() statements like in my example (my simplified sample had 2 for A.first and A.last).
I was able to refactor all 4 of those index() statements (plus a 5th I was going to add) into a regular expression that solves the same problem. It won't return an identical result set, but I think it actually returns better results than the 5 separate indexes since you can specify word edges.
In the datastep where I clean the names for matching, I create the following pattern:
pattern = cats('/\b(',substr(upcase(first_name),1,1),'|',upcase(first_name),').?\s?',upcase(last_name),'\b/');
This should create a regex along the lines of /\b(F|FIRST).?\s?LAST\b/, which will match anything like F. Last, First Last, flast@email.com, etc. (there are combinations it doesn't pick up, but I was only concerned with combinations I observe in my data). Using '\b' also prevents matches where FLAST happens to be the start or end of a longer word (such as "Edward Lo" getting matched to "Eloquent"), which I find hard to avoid with index().
Then I do my sql join like this:
proc sql noprint;
create table matched as
select B.*,
prxparse(B.pattern) as prxm,
A.*
from search_text as A,
search_names as B
where prxmatch(calculated prxm,A.notes)
order by A.id;
quit;
run;
Being able to compile the regex once per name in B, and then run it on each piece of text in A seems to be dramatically faster than a couple of index statements (not sure about the case of a regex vs a single index).
Running it with A=250,000 Obs and B=4,000 Obs, took something like 90 minutes of CPU time for the index() method, while doing the same with prxmatch() took only 20 minutes of CPU time.
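The pattern-building step translates directly to Python's re module, which makes the matching behavior easy to verify outside SAS. A sketch (the helper name and sample strings are made up; as in the SAS code, the text being searched is assumed to be uppercased):

```python
import re

def name_pattern(first, last):
    # Mirrors the SAS cats(...) expression: initial-or-first-name,
    # optional punctuation and space, then the last name, on word boundaries.
    f, l = first.upper(), last.upper()
    return re.compile(r'\b(%s|%s).?\s?%s\b'
                      % (re.escape(f[0]), re.escape(f), re.escape(l)))

pat = name_pattern('First', 'Last')
for text in ('F. LAST called', 'FIRST LAST called', 'mail from FLAST@EMAIL.COM'):
    print(bool(pat.search(text)))   # all three forms match
# \b keeps 'Edward Lo' from matching inside 'ELOQUENT':
print(bool(name_pattern('Edward', 'Lo').search('ELOQUENT')))
```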
qid & accept id:
(18724492, 18724569)
query:
Deleting database existing record while asigning values from one row to other with unique values
soup:
This can only be done in multiple steps (i.e. not a single UPDATE statement) in MySQL, because of the following points
\nPoint 1: To get a list of rows that do not have the same pid as other rows, you would need to do a query before your update. For example:
\nSELECT id FROM `order` \nWHERE pid NOT IN (\n SELECT pid FROM `order`\n GROUP BY pid\n HAVING COUNT(*) > 1\n)\n
\nThat'll give you the list of IDs that don't share a pid with other rows. However we have to deal with Point 2, from http://dev.mysql.com/doc/refman/5.6/en/subquery-restrictions.html:
\n\nIn general, you cannot modify a table and select from the same table in a subquery.
\n
\nThat means you can't use such a subquery in your UPDATE statement. You're going to have to use a staging table to store the pids and UPDATE based on that set.
\nFor example, the following code creates a temporary table called badpids that contains all pids that appear multiple times in the orders table. Then, we execute the UPDATE, but only for rows that don't have a pid in the list of badpids:
\nCREATE TEMPORARY TABLE badpids (pid int);\n\nINSERT INTO badpids\n SELECT pid FROM `order`\n GROUP BY pid\n HAVING COUNT(*) > 1;\n\nUPDATE `order` SET cid = 1\nWHERE cid= 2 \nAND pid NOT IN (SELECT pid FROM badpids);\n
\n
soup wrap:
This can only be done in multiple steps (i.e. not a single UPDATE statement) in MySQL, because of the following points:
Point 1: To get a list of rows that do not have the same pid as other rows, you would need to do a query before your update. For example:
SELECT id FROM `order`
WHERE pid NOT IN (
SELECT pid FROM `order`
GROUP BY pid
HAVING COUNT(*) > 1
)
That'll give you the list of IDs that don't share a pid with other rows. However we have to deal with Point 2, from http://dev.mysql.com/doc/refman/5.6/en/subquery-restrictions.html:
In general, you cannot modify a table and select from the same table in a subquery.
That means you can't use such a subquery in your UPDATE statement. You're going to have to use a staging table to store the pids and UPDATE based on that set.
For example, the following code creates a temporary table called badpids that contains all pids that appear multiple times in the orders table. Then, we execute the UPDATE, but only for rows that don't have a pid in the list of badpids:
CREATE TEMPORARY TABLE badpids (pid int);
INSERT INTO badpids
SELECT pid FROM `order`
GROUP BY pid
HAVING COUNT(*) > 1;
UPDATE `order` SET cid = 1
WHERE cid= 2
AND pid NOT IN (SELECT pid FROM badpids);
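The can't-modify-and-select restriction is MySQL-specific, but the staging-table sequence itself can be walked through with Python's sqlite3 (made-up rows; MySQL's backtick quoting becomes double quotes):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE "order" (id INTEGER PRIMARY KEY, pid INTEGER, cid INTEGER);
INSERT INTO "order" (pid, cid) VALUES (10, 2), (10, 2), (20, 2), (30, 1);

-- Stage the duplicated pids first, then UPDATE against the staged set.
CREATE TEMPORARY TABLE badpids (pid INT);
INSERT INTO badpids
    SELECT pid FROM "order" GROUP BY pid HAVING COUNT(*) > 1;

UPDATE "order" SET cid = 1
WHERE cid = 2 AND pid NOT IN (SELECT pid FROM badpids);
""")
result = con.execute('SELECT pid, cid FROM "order" ORDER BY id').fetchall()
print(result)  # only the row with the unshared pid (20) was updated
```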
qid & accept id:
(18737626, 18738555)
query:
SQL: selecting things ONLY associated with one value
soup:
I hope I understand your question correctly, you want a list of all fruits (with the same name/title) returned, only if there is only one kind of color for that , otherwise you want none in your results.
\nThis looks a bit dirty using a subquery but is the best I could come up with in short time:
\nusing this table structure:
\nCREATE TABLE Fruits (Id INT PRIMARY KEY auto_increment, Title VARCHAR(63), Colour VARCHAR(63));\n\nINSERT INTO Fruits (Title, Colour)\n SELECT 'Apple', 'Green'\n UNION ALL\n SELECT 'Apple', 'Green'\n UNION ALL\n SELECT 'Apple', 'Blue'\n UNION\n SELECT 'Orange', 'Yellow'\n UNION ALL\n SELECT 'Orange', 'Yellow';\n
\nYou can perform this query
\nSELECT\n Id\n FROM Fruits AS OuterFruits\n WHERE\n Title = 'Orange'\n AND\n (\n SELECT\n COUNT(Colour)\n FROM Fruits AS InnerFruits\n WHERE\n InnerFruits.Colour != OuterFruits.Colour\n AND InnerFruits.Title = OuterFruits.Title\n ) = 0;\n
\nThis will give the rows of the two oranges inserted, if you however where to replace 'Orange' with 'Apple' in that last query you would get an empty result set, because there are different colours of apples available.
\nYou can try that online in this fiddle also.
\nPlease note that this is mysql-syntax (since you did not include any special sql version, but I'm pretty sure only auto_increment is mysql-specific)
\n
soup wrap:
I hope I understand your question correctly: you want a list of all fruits (with the same name/title) returned only if there is just one colour for that fruit; otherwise you want none in your results.
This looks a bit dirty using a subquery but is the best I could come up with in short time:
using this table structure:
CREATE TABLE Fruits (Id INT PRIMARY KEY auto_increment, Title VARCHAR(63), Colour VARCHAR(63));
INSERT INTO Fruits (Title, Colour)
SELECT 'Apple', 'Green'
UNION ALL
SELECT 'Apple', 'Green'
UNION ALL
SELECT 'Apple', 'Blue'
UNION
SELECT 'Orange', 'Yellow'
UNION ALL
SELECT 'Orange', 'Yellow';
You can perform this query
SELECT
Id
FROM Fruits AS OuterFruits
WHERE
Title = 'Orange'
AND
(
SELECT
COUNT(Colour)
FROM Fruits AS InnerFruits
WHERE
InnerFruits.Colour != OuterFruits.Colour
AND InnerFruits.Title = OuterFruits.Title
) = 0;
This will give the rows of the two oranges inserted, if you however where to replace 'Orange' with 'Apple' in that last query you would get an empty result set, because there are different colours of apples available.
You can try that online in this fiddle also.
Please note that this is MySQL syntax (since you did not specify a particular SQL dialect), but I'm pretty sure only auto_increment is MySQL-specific.
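Indeed, apart from auto_increment the query is portable; it runs unchanged under Python's sqlite3 (INTEGER PRIMARY KEY plays the auto_increment role there):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Fruits (Id INTEGER PRIMARY KEY, Title TEXT, Colour TEXT);
INSERT INTO Fruits (Title, Colour) VALUES
    ('Apple', 'Green'), ('Apple', 'Green'), ('Apple', 'Blue'),
    ('Orange', 'Yellow'), ('Orange', 'Yellow');
""")

def ids_for(title):
    # A row qualifies only when no same-titled row has a different colour.
    return con.execute("""
        SELECT Id FROM Fruits AS OuterFruits
        WHERE Title = ?
          AND (SELECT COUNT(Colour) FROM Fruits AS InnerFruits
               WHERE InnerFruits.Colour != OuterFruits.Colour
                 AND InnerFruits.Title = OuterFruits.Title) = 0;
    """, (title,)).fetchall()

print(ids_for('Orange'))  # both orange rows come back
print(ids_for('Apple'))   # empty: apples come in two colours
```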
qid & accept id:
(18747853, 18748008)
query:
mySQL SELECT upcoming birthdays
soup:
To get all birthdays in next 7 days, add the year difference between the date of birth and today to the date of birth and then find if it falls within next seven days.
\nSELECT * \nFROM persons \nWHERE DATE_ADD(birthday, \n INTERVAL YEAR(CURDATE())-YEAR(birthday)\n + IF(DAYOFYEAR(CURDATE()) > DAYOFYEAR(birthday),1,0)\n YEAR) \n BETWEEN CURDATE() AND DATE_ADD(CURDATE(), INTERVAL 7 DAY);\n
\nIf you want to exclude today's birthdays just change > to >=
\nSELECT * \nFROM persons \nWHERE DATE_ADD(birthday, \n INTERVAL YEAR(CURDATE())-YEAR(birthday)\n + IF(DAYOFYEAR(CURDATE()) >= DAYOFYEAR(birthday),1,0)\n YEAR) \n BETWEEN CURDATE() AND DATE_ADD(CURDATE(), INTERVAL 7 DAY);\n\n-- Same as above query with another way to exclude today's birthdays \nSELECT * \nFROM persons \nWHERE DATE_ADD(birthday, \n INTERVAL YEAR(CURDATE())-YEAR(birthday)\n + IF(DAYOFYEAR(CURDATE()) > DAYOFYEAR(birthday),1,0)\n YEAR) \n BETWEEN CURDATE() AND DATE_ADD(CURDATE(), INTERVAL 7 DAY)\n AND DATE_ADD(birthday, INTERVAL YEAR(CURDATE())-YEAR(birthday) YEAR) <> CURDATE();\n\n\n-- Same as above query with another way to exclude today's birthdays \nSELECT * \nFROM persons \nWHERE DATE_ADD(birthday, \n INTERVAL YEAR(CURDATE())-YEAR(birthday)\n + IF(DAYOFYEAR(CURDATE()) > DAYOFYEAR(birthday),1,0)\n YEAR) \n BETWEEN CURDATE() AND DATE_ADD(CURDATE(), INTERVAL 7 DAY)\n AND (MONTH(birthday) <> MONTH(CURDATE()) OR DAY(birthday) <> DAY(CURDATE()));\n
\nHere is a DEMO of all queries
\n
soup wrap:
To get all birthdays in the next 7 days, add the year difference between the date of birth and today to the date of birth, and then check whether the result falls within the next seven days.
SELECT *
FROM persons
WHERE DATE_ADD(birthday,
INTERVAL YEAR(CURDATE())-YEAR(birthday)
+ IF(DAYOFYEAR(CURDATE()) > DAYOFYEAR(birthday),1,0)
YEAR)
BETWEEN CURDATE() AND DATE_ADD(CURDATE(), INTERVAL 7 DAY);
If you want to exclude today's birthdays just change > to >=
SELECT *
FROM persons
WHERE DATE_ADD(birthday,
INTERVAL YEAR(CURDATE())-YEAR(birthday)
+ IF(DAYOFYEAR(CURDATE()) >= DAYOFYEAR(birthday),1,0)
YEAR)
BETWEEN CURDATE() AND DATE_ADD(CURDATE(), INTERVAL 7 DAY);
-- Same as above query with another way to exclude today's birthdays
SELECT *
FROM persons
WHERE DATE_ADD(birthday,
INTERVAL YEAR(CURDATE())-YEAR(birthday)
+ IF(DAYOFYEAR(CURDATE()) > DAYOFYEAR(birthday),1,0)
YEAR)
BETWEEN CURDATE() AND DATE_ADD(CURDATE(), INTERVAL 7 DAY)
AND DATE_ADD(birthday, INTERVAL YEAR(CURDATE())-YEAR(birthday) YEAR) <> CURDATE();
-- Same as above query with another way to exclude today's birthdays
SELECT *
FROM persons
WHERE DATE_ADD(birthday,
INTERVAL YEAR(CURDATE())-YEAR(birthday)
+ IF(DAYOFYEAR(CURDATE()) > DAYOFYEAR(birthday),1,0)
YEAR)
BETWEEN CURDATE() AND DATE_ADD(CURDATE(), INTERVAL 7 DAY)
AND (MONTH(birthday) <> MONTH(CURDATE()) OR DAY(birthday) <> DAY(CURDATE()));
Here is a DEMO of all queries
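The same shift-the-birthday-to-this-year-then-check-the-window logic can be mirrored in application code. A minimal Python sketch (the names and the Feb 29 fallback to Feb 28 are my own illustrative choices, not part of the answer):

```python
from datetime import date, timedelta

def next_birthday(birthday, today):
    """Shift the birth date forward by whole years until it is on or
    after today, mirroring YEAR(CURDATE())-YEAR(birthday) + IF(...)."""
    def on(year):
        try:
            return birthday.replace(year=year)
        except ValueError:          # Feb 29 birthday in a non-leap year
            return date(year, 2, 28)
    candidate = on(today.year)
    return candidate if candidate >= today else on(today.year + 1)

def upcoming(people, today, days=7):
    """People whose next birthday falls within `days` of today
    (today's birthdays included, like the first query)."""
    horizon = today + timedelta(days=days)
    return [(name, bday) for name, bday in people
            if next_birthday(bday, today) <= horizon]
```

Since `next_birthday` is always on or after today, only the upper bound needs checking. Note that the SQL version's `DAYOFYEAR` comparison can drift by a day around leap years, which a month/day-based comparison like the one above avoids.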
qid & accept id:
(18749306, 18749534)
query:
Create table and get data from another table
soup:
Try this
\n--create table without realization column\nCREATE TABLE [dbo].[CostCategory](\n[ID_CostCategory] [int] NOT NULL,\n[Name] [varchar](150) NOT NULL,\n[Plan] [money] NOT NULL\n) go\n\nCREATE TABLE [dbo].[Cost](\n[ID_Cost] [int] NOT NULL,\n[Name] [varchar](50) NULL,\n[ID_CostCategory] [int] NULL,\n[ID_Department] [int] NULL,\n[ID_Project] [int] NULL,\n[Value] [money] NULL,\n\n) go \n
\nCreate a UDF to calculate sum of the cost column:
\nCREATE FUNCTION [dbo].[CalculateRealization](@Id INT) \nRETURNS money\nAS \nBEGIN\n DECLARE @cost money\n\n SELECT @cost = SUM(Value)\n FROM [dbo].[Cost]\n WHERE [ID_CostCategory] = @ID\n\n return @cost\nEND\n
\nNow Alter your CostCategory table to add computed column:
\nALTER TABLE [dbo].[CostCategory]\n ADD [Realization] AS dbo.CalculateRealization(ID_CostCategory);\n
\nNow you can select Realization from Costcategory
\nSELECT ID_CostCategory, Realization\nFROM [dbo].[CostCategory]\n
\nAnswer to your comment below:
\nCreate Another UDF
\nCREATE FUNCTION [dbo].[CheckValue](@Id INT, @value Money) \nRETURNS INT\nAS \nBEGIN\n DECLARE @flg INT\n SELECT @flg = CASE WHEN [Plan] >= @value THEN 1 ELSE 0 END\n FROM [dbo].[CostCategory]\n WHERE [ID_CostCategory] = @ID\n\n return @flg;\nEND\n
\nNow add Constraint on Cost Table:
\nALTER TABLE [dbo].[Cost]\n    ADD CONSTRAINT CHK_VAL_PLAN_COSTCATG\n    CHECK(dbo.CheckValue(ID_CostCategory, Value) = 1)\n
\n
soup wrap:
Try this
--create table without realization column
CREATE TABLE [dbo].[CostCategory](
[ID_CostCategory] [int] NOT NULL,
[Name] [varchar](150) NOT NULL,
[Plan] [money] NOT NULL
)
GO
CREATE TABLE [dbo].[Cost](
[ID_Cost] [int] NOT NULL,
[Name] [varchar](50) NULL,
[ID_CostCategory] [int] NULL,
[ID_Department] [int] NULL,
[ID_Project] [int] NULL,
[Value] [money] NULL
)
GO
Create a UDF to calculate sum of the cost column:
CREATE FUNCTION [dbo].[CalculateRealization](@Id INT)
RETURNS money
AS
BEGIN
DECLARE @cost money
SELECT @cost = SUM(Value)
FROM [dbo].[Cost]
WHERE [ID_CostCategory] = @ID
return @cost
END
Now Alter your CostCategory table to add computed column:
ALTER TABLE [dbo].[CostCategory]
ADD [Realization] AS dbo.CalculateRealization(ID_CostCategory);
Now you can select Realization from Costcategory
SELECT ID_CostCategory, Realization
FROM [dbo].[CostCategory]
Answer to your comment below:
Create Another UDF
CREATE FUNCTION [dbo].[CheckValue](@Id INT, @value Money)
RETURNS INT
AS
BEGIN
DECLARE @flg INT
SELECT @flg = CASE WHEN [Plan] >= @value THEN 1 ELSE 0 END
FROM [dbo].[CostCategory]
WHERE [ID_CostCategory] = @ID
return @flg;
END
Now add Constraint on Cost Table:
ALTER TABLE [dbo].[Cost]
ADD CONSTRAINT CHK_VAL_PLAN_COSTCATG
CHECK(dbo.CheckValue(ID_CostCategory, Value) = 1)
qid & accept id:
(18757944, 18758087)
query:
How change non nullable column to nullable column
soup:
If you just want to "fake" the value of a column in a result set, try
\nselect id, name, NULL as [date] from samp\n
\nIf you want to change the underlying data, do
\nUPDATE samp set [date] = NULL\n
\n
soup wrap:
If you just want to "fake" the value of a column in a result set, try
select id, name, NULL as [date] from samp
If you want to change the underlying data, do
UPDATE samp set [date] = NULL
qid & accept id:
(18764988, 18765166)
query:
SQL Query Comparing Date
soup:
If you are using MS SQL Server try this code:
\nSELECT tb.date_added\n FROM MyTable tb\n WHERE tb.date_added > DATEADD(week, -2, GETDATE())\n
\nFor MySQL try:
\nSELECT tb.date_added\n FROM MyTable tb\n WHERE DATE_ADD(tb.date_added, INTERVAL 2 WEEK) >= NOW();\n
\n
soup wrap:
If you are using MS SQL Server try this code:
SELECT tb.date_added
FROM MyTable tb
WHERE tb.date_added > DATEADD(week, -2, GETDATE())
For MySQL try:
SELECT tb.date_added
FROM MyTable tb
WHERE DATE_ADD(tb.date_added, INTERVAL 2 WEEK) >= NOW();
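Both predicates describe the same window, "date_added within the last two weeks". As a sketch, the equivalent filter in application code (the helper name and sample timestamps are illustrative):

```python
from datetime import datetime, timedelta

def within_last_two_weeks(timestamps, now):
    """Keep timestamps newer than two weeks before `now` --
    the same cutoff as date_added > DATEADD(week, -2, GETDATE())."""
    cutoff = now - timedelta(weeks=2)
    return [t for t in timestamps if t > cutoff]
```

Worth noting: the SQL Server form keeps the column bare on the left of the comparison, so an index on date_added can be used; the MySQL form as written wraps the column in DATE_ADD, which generally prevents that.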
qid & accept id:
(18777437, 18777827)
query:
SQL: Linking Multiple Rows in Table Based on Data Chain in Select
soup:
SELECT * FROM LinkedTable lt\nWHERE lt.link_sequence IN \n    ( SELECT link_sequence FROM LinkedTable WHERE code = 3245 AND link_sequence IS NOT NULL ) \nORDER BY lt.ID;\n
\nSee my SQL Fiddle DEMO.
\nSECOND ATTEMPT:
\nSELECT DISTINCT * \nFROM LinkedTable\nSTART WITH code = 3245\nCONNECT BY NOCYCLE\n PRIOR code = code AND PRIOR link_sequence+1 = link_sequence OR\n PRIOR code <> code AND PRIOR link_sequence = link_sequence\nORDER BY link_sequence, code\n;\n
\nUpdated SQL Fiddle with this code. Please try to break it.
\nBased on your data (starting with 3245) it gives the following chain:
\nID CODE LINK_SEQUENCE NAME\n2 3245 1 Potato\n1 3267 1 Potato\n3 3245 2 Potato\n4 3975 2 Potato\n5 3975 3 Potato\n6 5478 3 Potato\n
\n
soup wrap:
SELECT * FROM LinkedTable lt
WHERE lt.link_sequence IN
( SELECT link_sequence FROM LinkedTable WHERE code = 3245 AND link_sequence IS NOT NULL )
ORDER BY lt.ID;
See my SQL Fiddle DEMO.
SECOND ATTEMPT:
SELECT DISTINCT *
FROM LinkedTable
START WITH code = 3245
CONNECT BY NOCYCLE
PRIOR code = code AND PRIOR link_sequence+1 = link_sequence OR
PRIOR code <> code AND PRIOR link_sequence = link_sequence
ORDER BY link_sequence, code
;
Updated SQL Fiddle with this code. Please try to break it.
Based on your data (starting with 3245) it gives the following chain:
ID CODE LINK_SEQUENCE NAME
2 3245 1 Potato
1 3267 1 Potato
3 3245 2 Potato
4 3975 2 Potato
5 3975 3 Potato
6 5478 3 Potato
qid & accept id:
(18778492, 18780465)
query:
MS Access Alter Statement: change column data type to DATETIME
soup:
Try running these:
\nALTER TABLE table1 ADD COLUMN NewDate DATE\n
\nThen run
\nUPDATE table1\nSET NewDate = RecordTime\nWHERE RIGHT(RecordTime,4) <> '- ::'\n
\nYou can then delete the RecordTime column and rename NewDate.
\nI prefer adding a new column just in case there are any issues with the UPDATE and you can compare the 'cleaned' column and the initial data before proceeding.
\n
soup wrap:
Try running these:
ALTER TABLE table1 ADD COLUMN NewDate DATE
Then run
UPDATE table1
SET NewDate = RecordTime
WHERE RIGHT(RecordTime,4) <> '- ::'
You can then delete the RecordTime column and rename NewDate.
I prefer adding a new column just in case there are any issues with the UPDATE and you can compare the 'cleaned' column and the initial data before proceeding.
qid & accept id:
(18799810, 18800637)
query:
function with multiple where
soup:
First, I would use an inline UDF instead of scalar function for performance reasons.
\nSecond, there are two options:
\n1) A function that shows total for every department
\nCREATE FUNCTION [dbo].[Table2](@pID_CostCategory INT) \nRETURNS TABLE\nAS \nRETURN\n SELECT [ID_Department], SUM(Value) AS koszt\n FROM [dbo].[Cost]\n WHERE [ID_CostCategory] = @pID_CostCategory\n GROUP BY[ID_Department];\nGO \n
\nor
\n2) A function which has two parameters, the second parameter being optional
\nCREATE FUNCTION [dbo].[Table2](@pID_CostCategory INT, @pID_Department INT=NULL) \nRETURNS TABLE\nAS \nRETURN\n SELECT SUM(Value) AS koszt\n FROM [dbo].[Cost]\n WHERE [ID_CostCategory] = @pID_CostCategory\n AND ([ID_Department] = @pID_Department OR @pID_Department IS NULL)\nGO\n
\n
soup wrap:
First, I would use an inline UDF instead of scalar function for performance reasons.
Second, there are two options:
1) A function that shows total for every department
CREATE FUNCTION [dbo].[Table2](@pID_CostCategory INT)
RETURNS TABLE
AS
RETURN
SELECT [ID_Department], SUM(Value) AS koszt
FROM [dbo].[Cost]
WHERE [ID_CostCategory] = @pID_CostCategory
GROUP BY [ID_Department];
GO
or
2) A function which has two parameters, the second parameter being optional
CREATE FUNCTION [dbo].[Table2](@pID_CostCategory INT, @pID_Department INT=NULL)
RETURNS TABLE
AS
RETURN
SELECT SUM(Value) AS koszt
FROM [dbo].[Cost]
WHERE [ID_CostCategory] = @pID_CostCategory
AND ([ID_Department] = @pID_Department OR @pID_Department IS NULL)
GO
qid & accept id:
(18852505, 18853189)
query:
Join distant SQL tables without pulling data in between
soup:
Use DISTINCT to count the distinct Box.id in your query -
\nSELECT \n Box.expected_delivery_date, count(DISTINCT Box.id) num_boxes\nFROM\n Box\n JOIN\n Subscription ON Box.subscription_id = Subscription.id\n JOIN\n BoxContent ON Subscription.id = BoxContent.subscription_id\n JOIN\n Schedule ON Schedule.id = BoxContent.schedule_id\nWHERE\n Box.state = 3 AND Box.status = 2\nGROUP BY Box.expected_delivery_date;\n
\nThis should return
\n2010-10-01 - 2
\n2010-10-07 - 4
\nSimilarly, when you JOIN box with subscription, content, schedule tables you will get many duplicates. You need to analyze the data and see how you need to GROUP BY.
\nUse this query to see the actual data used by the query before grouping and decide on which columns to group by. Mostly, it will be the columns where you see duplicate data in multiple rows.
\nSELECT \n Box.expected_delivery_date, Box.id BoxID, Schedule.id SchID\nFROM\n Box\n JOIN\n Subscription ON Box.subscription_id = Subscription.id\n JOIN\n BoxContent ON Subscription.id = BoxContent.subscription_id\n JOIN\n Schedule ON Schedule.id = BoxContent.schedule_id\nWHERE\n Box.state = 3 AND Box.status = 2\n
\nYou may even try SELECT Box.*, Schedule.* in above query to come up with a final grouping.
\nIf you need a more specific answer, you will have to provide dummy data for all those tables and the result you are looking for.
\n
soup wrap:
Use DISTINCT to count the distinct Box.id in your query -
SELECT
Box.expected_delivery_date, count(DISTINCT Box.id) num_boxes
FROM
Box
JOIN
Subscription ON Box.subscription_id = Subscription.id
JOIN
BoxContent ON Subscription.id = BoxContent.subscription_id
JOIN
Schedule ON Schedule.id = BoxContent.schedule_id
WHERE
Box.state = 3 AND Box.status = 2
GROUP BY Box.expected_delivery_date;
This should return
2010-10-01 - 2
2010-10-07 - 4
Similarly, when you JOIN box with subscription, content, schedule tables you will get many duplicates. You need to analyze the data and see how you need to GROUP BY.
Use this query to see the actual data used by the query before grouping and decide on which columns to group by. Mostly, it will be the columns where you see duplicate data in multiple rows.
SELECT
Box.expected_delivery_date, Box.id BoxID, Schedule.id SchID
FROM
Box
JOIN
Subscription ON Box.subscription_id = Subscription.id
JOIN
BoxContent ON Subscription.id = BoxContent.subscription_id
JOIN
Schedule ON Schedule.id = BoxContent.schedule_id
WHERE
Box.state = 3 AND Box.status = 2
You may even try SELECT Box.*, Schedule.* in above query to come up with a final grouping.
If you need a more specific answer, you will have to provide dummy data for all those tables and the result you are looking for.
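The fan-out problem and the DISTINCT fix are easy to reproduce with an in-memory database. A small sqlite3 sketch (all table contents and names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE box (id INTEGER, subscription_id INTEGER, expected_delivery_date TEXT);
CREATE TABLE box_content (subscription_id INTEGER, schedule_id INTEGER);
-- one box whose subscription has two content rows
INSERT INTO box VALUES (1, 10, '2010-10-01');
INSERT INTO box_content VALUES (10, 100), (10, 101);
""")
join_sql = """
    SELECT COUNT({expr}) FROM box b
    JOIN box_content c ON c.subscription_id = b.subscription_id
"""
fanout = conn.execute(join_sql.format(expr="b.id")).fetchone()[0]
deduped = conn.execute(join_sql.format(expr="DISTINCT b.id")).fetchone()[0]
# fanout is 2 (the join duplicates the single box row); deduped is 1
```

This is exactly why counting bare rows after the join over-reports boxes, while COUNT(DISTINCT Box.id) collapses the duplicates back.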
qid & accept id:
(18858779, 18859176)
query:
T-SQL "Dynamic" Join
soup:
This SQL will compute the permutations without repetitions:
\nWITH recurse(Result, Depth) AS\n(\n SELECT CAST(Value AS VarChar(100)), 1\n FROM MyTable\n\n UNION ALL\n\n SELECT CAST(r.Result + '+' + a.Value AS VarChar(100)), r.Depth + 1\n FROM MyTable a\n INNER JOIN recurse r\n ON CHARINDEX(a.Value, r.Result) = 0\n)\n\nSELECT Result\nFROM recurse\nWHERE Depth = (SELECT COUNT(*) FROM MyTable)\nORDER BY Result\n
\nIf MyTable contains 9 rows, it will take some time to compute, but it will return 362,880 rows.
\nUpdate with explanation:
\nThe WITH statement is used to define a recursive common table expression. In effect, the WITH statement is looping multiple times performing a UNION until the recursion is finished.
\nThe first part of SQL sets the starting records. Assuming 3 rows named 'A', 'B', and 'C' in MyTable, this will generate these rows:
\n Result Depth\n ------ -----\n A 1\n B 1\n C 1\n
\nThen the next block of SQL performs the first level of recursion:
\n SELECT CAST(r.Result + '+' + a.Value AS VarChar(100)), r.Depth + 1\n FROM MyTable a\n INNER JOIN recurse r\n ON CHARINDEX(a.Value, r.Result) = 0\n
\nThis takes all of the records generated so far (which will be in the recurse table) and joins them to all of the records in MyTable again. The ON clause filters the list of records in MyTable to only return the ones that do not exist already in this row's permutation. This would result in these rows:
\n Result Depth\n ------ -----\n A 1\n B 1\n C 1\n A+B 2\n A+C 2\n B+A 2\n B+C 2\n C+A 2\n C+B 2\n
\nThen the recursion loops again giving these rows:
\n Result Depth\n ------ -----\n A 1\n B 1\n C 1\n A+B 2\n A+C 2\n B+A 2\n B+C 2\n C+A 2\n C+B 2\n A+B+C 3\n A+C+B 3\n B+A+C 3\n B+C+A 3\n C+A+B 3\n C+B+A 3\n
\nAt this point, the recursion stops because the UNION does not create any more rows because the CHARINDEX will always be 0.
\nThe last SQL filters all of the resulting rows where the computed Depth column matches the # of records in MyTable. This throws out all of the rows except for the ones generated by the last depth of recursion. So the final result will be these rows:
\n Result\n ------\n A+B+C\n A+C+B\n B+A+C\n B+C+A\n C+A+B\n C+B+A\n
\n
soup wrap:
This SQL will compute the permutations without repetitions:
WITH recurse(Result, Depth) AS
(
SELECT CAST(Value AS VarChar(100)), 1
FROM MyTable
UNION ALL
SELECT CAST(r.Result + '+' + a.Value AS VarChar(100)), r.Depth + 1
FROM MyTable a
INNER JOIN recurse r
ON CHARINDEX(a.Value, r.Result) = 0
)
SELECT Result
FROM recurse
WHERE Depth = (SELECT COUNT(*) FROM MyTable)
ORDER BY Result
If MyTable contains 9 rows, it will take some time to compute, but it will return 362,880 rows.
Update with explanation:
The WITH statement is used to define a recursive common table expression. In effect, the WITH statement is looping multiple times performing a UNION until the recursion is finished.
The first part of SQL sets the starting records. Assuming 3 rows named 'A', 'B', and 'C' in MyTable, this will generate these rows:
Result Depth
------ -----
A 1
B 1
C 1
Then the next block of SQL performs the first level of recursion:
SELECT CAST(r.Result + '+' + a.Value AS VarChar(100)), r.Depth + 1
FROM MyTable a
INNER JOIN recurse r
ON CHARINDEX(a.Value, r.Result) = 0
This takes all of the records generated so far (which will be in the recurse table) and joins them to all of the records in MyTable again. The ON clause filters the list of records in MyTable to only return the ones that do not exist already in this row's permutation. This would result in these rows:
Result Depth
------ -----
A 1
B 1
C 1
A+B 2
A+C 2
B+A 2
B+C 2
C+A 2
C+B 2
Then the recursion loops again giving these rows:
Result Depth
------ -----
A 1
B 1
C 1
A+B 2
A+C 2
B+A 2
B+C 2
C+A 2
C+B 2
A+B+C 3
A+C+B 3
B+A+C 3
B+C+A 3
C+A+B 3
C+B+A 3
At this point, the recursion stops because the UNION does not create any more rows because the CHARINDEX will always be 0.
The last SQL filters all of the resulting rows where the computed Depth column matches the # of records in MyTable. This throws out all of the rows except for the ones generated by the last depth of recursion. So the final result will be these rows:
Result
------
A+B+C
A+C+B
B+A+C
B+C+A
C+A+B
C+B+A
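The row counts above can be checked directly: permutations without repetition of n items number n!, and the '+'-joined strings the CTE emits at full depth match itertools.permutations:

```python
from itertools import permutations
from math import factorial

values = ["A", "B", "C"]
# the strings the recursive CTE returns where Depth = COUNT(*)
results = sorted("+".join(p) for p in permutations(values))
nine_row_count = factorial(9)  # the 362,880 rows quoted for a 9-row table
```

One caveat worth flagging: the CTE's CHARINDEX test treats a value as "already used" whenever it appears as a substring of the accumulated result, so it assumes no value is a substring of another (e.g. 'A' and 'AB' together would be filtered incorrectly).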
qid & accept id:
(18865590, 18865714)
query:
Applying multiple condition on a column
soup:
try it with the following for your Results in one row:
\nSELECT\n(SELECT COUNT(*)\nFROM Table\nWHERE task = 'search' or task = 'Basic' or task = 'natural search') AS CountSearch,\n(SELECT COUNT(*)\nFROM Table\nWHERE task = 'Query1' or task = 'Query2' or task = 'Query3') AS CountQuery,\n(SELECT COUNT(*)\nFROM Table\nWHERE task = 'sample1' or task = 'sample2') AS CountSample,\n(SELECT COUNT(*)\nFROM Table\nWHERE task = 'test1' or task = 'test2' or task = 'test3') AS CountTest\n
\nAnd the following for your results in several rows:
\nSELECT 'CountSearch', COUNT(*)\nFROM Table\nWHERE task = 'search' or task = 'Basic' or task = 'natural search'\nUNION ALL\nSELECT 'CountQuery', COUNT(*)\nFROM Table\nWHERE task = 'Query1' or task = 'Query2' or task = 'Query3'\nUNION ALL\nSELECT 'CountSample', COUNT(*)\nFROM Table\nWHERE task = 'sample1' or task = 'sample2'\nUNION ALL\nSELECT 'CountTest', COUNT(*)\nFROM Table\nWHERE task = 'test1' or task = 'test2' or task = 'test3'\n
\nI renamed your columns, because you can't use brackets in a column name in a SQL statement.
\n
soup wrap:
try it with the following for your Results in one row:
SELECT
(SELECT COUNT(*)
FROM Table
WHERE task = 'search' or task = 'Basic' or task = 'natural search') AS CountSearch,
(SELECT COUNT(*)
FROM Table
WHERE task = 'Query1' or task = 'Query2' or task = 'Query3') AS CountQuery,
(SELECT COUNT(*)
FROM Table
WHERE task = 'sample1' or task = 'sample2') AS CountSample,
(SELECT COUNT(*)
FROM Table
WHERE task = 'test1' or task = 'test2' or task = 'test3') AS CountTest
And the following for your results in several rows:
SELECT 'CountSearch', COUNT(*)
FROM Table
WHERE task = 'search' or task = 'Basic' or task = 'natural search'
UNION ALL
SELECT 'CountQuery', COUNT(*)
FROM Table
WHERE task = 'Query1' or task = 'Query2' or task = 'Query3'
UNION ALL
SELECT 'CountSample', COUNT(*)
FROM Table
WHERE task = 'sample1' or task = 'sample2'
UNION ALL
SELECT 'CountTest', COUNT(*)
FROM Table
WHERE task = 'test1' or task = 'test2' or task = 'test3'
I renamed your columns, because you can't use brackets in a column name in a SQL statement.
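A single-pass alternative to the four subqueries is conditional aggregation, i.e. one SUM(CASE WHEN task IN (...) THEN 1 ELSE 0 END) per bucket. The same bucketing sketched procedurally (bucket names and membership sets taken from the answer, shortened to two buckets):

```python
SEARCH = {"search", "Basic", "natural search"}
QUERY = {"Query1", "Query2", "Query3"}

def bucket_counts(tasks):
    """One pass over the rows, one counter per bucket -- the procedural
    analogue of SUM(CASE WHEN ... THEN 1 ELSE 0 END) per column."""
    counts = {"CountSearch": 0, "CountQuery": 0}
    for t in tasks:
        if t in SEARCH:
            counts["CountSearch"] += 1
        elif t in QUERY:
            counts["CountQuery"] += 1
    return counts
```

The conditional-aggregation form scans the table once, whereas the four correlated subqueries each scan it separately.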
qid & accept id:
(18873251, 18877065)
query:
Is it possible to reference columns from one common table expression in another, without using joins?
soup:
Here's a vague outline of how I'd approach this. It makes a lot of assumptions, is missing key components, has not been debugged in any way, and is completely dependent on those queries you have no control over being "good" for hard-to-ascertain values of good.
\nAssumption: a set of queries that looks something like this:
\nLevel1Q: select * from users where name=:param_user\nLevel2Q: select * from projects where id=:param_id\nLevel3Q: select * from details where id=:param_id\nLevel4Q: \n
\nSo, for a "level 3" query, you'd want to generate the following:
\n;WITH\n Level1Q as (select * from users where name=:param_user)\n ,Level2Q as (select * from projects where id=:param_id)\n ,Level3Q as (select * from details where id=:param_id)\n select * from Level3Q\n
\nThis, or something much like it, should produce that query:
\nDECLARE\n @Command nvarchar(max)\n ,@Query nvarchar(max)\n ,@Loop int\n ,@MaxDepth int\n ,@CRLF char(2) = char(13) + char(10) -- Makes the dynamic code more legible\n\nSET @Command = 'WITH'\n\n\n-- Set @MaxDepth to the level you want to query at\nSET @MaxDepth = 3\nSET @Loop = 0\n\nWHILE @Loop < @MaxDepth\n BEGIN\n SET @Loop = @Looop + 1\n\n -- Get the query for this level\n SET @Query = \n\n SET @Command = replace(@Command + @CRLF\n + case @Loop when 1 then ' ' else ' ,' end\n + 'Level<<@Loop>>Q as (' + @Query + ')'\n ,':param_user', >Q.id') -- This assumes the link to the prior query is always by a column named "id"\n ,'<<@Loop>>', @Loop) -- Done last, as the prior replace added another <<@Loop>>\n\n END\n\n-- Add the final pull\nSET @Command = @Command + @CRLF + replace(' select * from Level<<@Loop>>Q', '<<@Loop>>', @Loop - 1)\n\n-- The most important command, because debugging this mess will be a pain\nPRINT @Command\n\n--EXECUTE sp_executeSQL @Command \n
\n
soup wrap:
Here's a vague outline of how I'd approach this. It makes a lot of assumptions, is missing key components, has not been debugged in any way, and is completely dependent on those queries you have no control over being "good" for hard-to-ascertain values of good.
Assumption: a set of queries that looks something like this:
Level1Q: select * from users where name=:param_user
Level2Q: select * from projects where id=:param_id
Level3Q: select * from details where id=:param_id
Level4Q:
So, for a "level 3" query, you'd want to generate the following:
;WITH
Level1Q as (select * from users where name=:param_user)
,Level2Q as (select * from projects where id=:param_id)
,Level3Q as (select * from details where id=:param_id)
select * from Level3Q
This, or something much like it, should produce that query:
DECLARE
@Command nvarchar(max)
,@Query nvarchar(max)
,@Loop int
,@MaxDepth int
,@CRLF char(2) = char(13) + char(10) -- Makes the dynamic code more legible
SET @Command = 'WITH'
-- Set @MaxDepth to the level you want to query at
SET @MaxDepth = 3
SET @Loop = 0
WHILE @Loop < @MaxDepth
BEGIN
SET @Loop = @Loop + 1
-- Get the query for this level
SET @Query =
SET @Command = replace(replace(@Command + @CRLF
+ case @Loop when 1 then ' ' else ' ,' end
+ 'Level<<@Loop>>Q as (' + @Query + ')'
,':param_id', 'Level<<@Loop>>Q.id') -- This assumes the link to the prior query is always by a column named "id"
,'<<@Loop>>', cast(@Loop as varchar(10))) -- Done last, as the prior replace added another <<@Loop>>
END
-- Add the final pull
SET @Command = @Command + @CRLF + replace(' select * from Level<<@Loop>>Q', '<<@Loop>>', cast(@Loop as varchar(10)))
-- The most important command, because debugging this mess will be a pain
PRINT @Command
--EXECUTE sp_executeSQL @Command
qid & accept id:
(18885583, 18887078)
query:
CTE to build hierarchy from source table
soup:
You can use OUTPUT in combination with Merge to get a Mapping from ID's to new ID's.
\nThe essential part:
\n--this is where you got stuck\nDeclare @MapIds Table (aOldID int,aNewID int)\n\n;MERGE INTO @NewSeed AS TargetTable\nUsing @DefaultSeed as Source on 1=0\nWHEN NOT MATCHED then\n Insert (Code,RequiredID)\n Values\n (Source.Code,Source.RequiredID)\nOUTPUT Source.ID ,inserted.ID into @MapIds;\n\n\nUpdate @NewSeed Set RequiredID=aNewID\nfrom @MapIds\nWhere RequiredID=aOldID\n
\nand the whole example:
\nDECLARE @Table TABLE (ID INT, Code NVARCHAR(50), RequiredID INT);\n\nINSERT INTO @Table (ID, Code, RequiredID) VALUES\n (1, 'Physics', NULL),\n (2, 'Advanced Physics', 1),\n (3, 'Nuke', 2),\n (4, 'Health', NULL); \n\nDECLARE @DefaultSeed TABLE (ID INT, Code NVARCHAR(50), RequiredID INT);\n\nWITH hierarchy \nAS (\n --anchor\n SELECT t.ID , t.Code , t.RequiredID\n FROM @Table AS t\n WHERE t.RequiredID IS NULL\n\n UNION ALL \n\n --recursive\n SELECT t.ID \n , t.Code \n , h.ID \n FROM hierarchy AS h\n JOIN @Table AS t \n ON t.RequiredID = h.ID\n )\n\nINSERT INTO @DefaultSeed (ID, Code, RequiredID)\nSELECT ID \n , Code \n , RequiredID\nFROM hierarchy\nOPTION (MAXRECURSION 10)\n\n\nDECLARE @NewSeed TABLE (ID INT IDENTITY(10, 1), Code NVARCHAR(50), RequiredID INT)\n\nDeclare @MapIds Table (aOldID int,aNewID int)\n\n;MERGE INTO @NewSeed AS TargetTable\nUsing @DefaultSeed as Source on 1=0\nWHEN NOT MATCHED then\n Insert (Code,RequiredID)\n Values\n (Source.Code,Source.RequiredID)\nOUTPUT Source.ID ,inserted.ID into @MapIds;\n\n\nUpdate @NewSeed Set RequiredID=aNewID\nfrom @MapIds\nWhere RequiredID=aOldID\n\n\n/*\n--@NewSeed should read like the following...\n[ID] [Code] [RequiredID]\n10....Physics..........NULL\n11....Health...........NULL\n12....AdvancedPhysics..10\n13....Nuke.............12\n*/\n\nSELECT *\nFROM @NewSeed\n
\n
soup wrap:
You can use OUTPUT in combination with Merge to get a Mapping from ID's to new ID's.
The essential part:
--this is where you got stuck
Declare @MapIds Table (aOldID int,aNewID int)
;MERGE INTO @NewSeed AS TargetTable
Using @DefaultSeed as Source on 1=0
WHEN NOT MATCHED then
Insert (Code,RequiredID)
Values
(Source.Code,Source.RequiredID)
OUTPUT Source.ID ,inserted.ID into @MapIds;
Update @NewSeed Set RequiredID=aNewID
from @MapIds
Where RequiredID=aOldID
and the whole example:
DECLARE @Table TABLE (ID INT, Code NVARCHAR(50), RequiredID INT);
INSERT INTO @Table (ID, Code, RequiredID) VALUES
(1, 'Physics', NULL),
(2, 'Advanced Physics', 1),
(3, 'Nuke', 2),
(4, 'Health', NULL);
DECLARE @DefaultSeed TABLE (ID INT, Code NVARCHAR(50), RequiredID INT);
WITH hierarchy
AS (
--anchor
SELECT t.ID , t.Code , t.RequiredID
FROM @Table AS t
WHERE t.RequiredID IS NULL
UNION ALL
--recursive
SELECT t.ID
, t.Code
, h.ID
FROM hierarchy AS h
JOIN @Table AS t
ON t.RequiredID = h.ID
)
INSERT INTO @DefaultSeed (ID, Code, RequiredID)
SELECT ID
, Code
, RequiredID
FROM hierarchy
OPTION (MAXRECURSION 10)
DECLARE @NewSeed TABLE (ID INT IDENTITY(10, 1), Code NVARCHAR(50), RequiredID INT)
Declare @MapIds Table (aOldID int,aNewID int)
;MERGE INTO @NewSeed AS TargetTable
Using @DefaultSeed as Source on 1=0
WHEN NOT MATCHED then
Insert (Code,RequiredID)
Values
(Source.Code,Source.RequiredID)
OUTPUT Source.ID ,inserted.ID into @MapIds;
Update @NewSeed Set RequiredID=aNewID
from @MapIds
Where RequiredID=aOldID
/*
--@NewSeed should read like the following...
[ID] [Code] [RequiredID]
10....Physics..........NULL
11....Health...........NULL
12....AdvancedPhysics..10
13....Nuke.............12
*/
SELECT *
FROM @NewSeed
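The OUTPUT clause's old-to-new mapping is, in essence, a dictionary. The reseed-and-relink step sketched in plain code (rows as (id, code, required_id) tuples in hierarchy order, the same shape as @DefaultSeed):

```python
def reseed(rows, start=10):
    """Copy rows with fresh ids from `start`, then rewrite each
    required_id through the old->new mapping (the @MapIds step)."""
    mapping = {}
    new_rows = []
    for new_id, (old_id, code, required_id) in enumerate(rows, start):
        mapping[old_id] = new_id
        new_rows.append([new_id, code, required_id])
    for row in new_rows:
        if row[2] is not None:
            row[2] = mapping[row[2]]   # relink parent to its new id
    return new_rows
```

Feeding it the example's hierarchy-ordered rows reproduces the commented expected output (Physics/Health first, then their dependents pointing at the new ids).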
qid & accept id:
(18904109, 18904185)
query:
Link one record to multiple records in separate table
soup:
You have a Many-to-Many relationship. Typically this is implemented by adding a table in between the two data tables:
\nPhones -> PhoneCarriers -> Carriers\n
\nPhoneCarrier will look something like:
\nPhoneCarrierID\nPhoneID (FK)\nCarrierID (FK)\n
\nYou won't have a foreign key directly from Phone to Carrier in that scenario.
\n
soup wrap:
You have a Many-to-Many relationship. Typically this is implemented by adding a table in between the two data tables:
Phones -> PhoneCarriers -> Carriers
PhoneCarrier will look something like:
PhoneCarrierID
PhoneID (FK)
CarrierID (FK)
You won't have a foreign key directly from Phone to Carrier in that scenario.
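A minimal sketch of the junction table in SQLite, run through Python (column and table names follow the answer's outline but are otherwise illustrative; SQLite needs foreign-key enforcement switched on explicitly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE phones   (phone_id   INTEGER PRIMARY KEY, number TEXT);
CREATE TABLE carriers (carrier_id INTEGER PRIMARY KEY, name   TEXT);
-- the in-between table: one row per (phone, carrier) pair
CREATE TABLE phone_carriers (
    phone_carrier_id INTEGER PRIMARY KEY,
    phone_id   INTEGER REFERENCES phones(phone_id),
    carrier_id INTEGER REFERENCES carriers(carrier_id)
);
INSERT INTO phones   VALUES (1, '555-0100');
INSERT INTO carriers VALUES (1, 'AcmeTel'), (2, 'Globex');
INSERT INTO phone_carriers (phone_id, carrier_id) VALUES (1, 1), (1, 2);
""")
n = conn.execute(
    "SELECT COUNT(*) FROM phone_carriers WHERE phone_id = 1").fetchone()[0]
```

One phone linked to two carriers, with no direct foreign key between the two data tables, which is the point of the junction-table design.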
qid & accept id:
(18920393, 18926121)
query:
SQL Server : get next relative day of week. (Next Monday, Tuesday, Wed.....)
soup:
1) Your solution uses a non-deterministic function: datepart(dw, ...). Because of this, changing the DATEFIRST setting will give different results. For example, you should try:
\nSET DATEFIRST 7;\nyour solution;\n
\nand then
\nSET DATEFIRST 1;\nyour solution;\n
\n2) Following solution is independent of DATEFIRST/LANGUAGE settings:
\nDECLARE @NextDayID INT = 0 -- 0=Mon, 1=Tue, 2 = Wed, ..., 5=Sat, 6=Sun\nSELECT DATEADD(DAY, (DATEDIFF(DAY, @NextDayID, GETDATE()) / 7) * 7 + 7, @NextDayID) AS NextDay\n
\nResult:
\nNextDay\n-----------------------\n2013-09-23 00:00:00.000\n
\nThis solution is based on following property of DATETIME type:
\n\nDay 0 = 19000101 = Mon
\nDay 1 = 19000102 = Tue
\nDay 2 = 19000103 = Wed
\n
\n...
\n\nDay 5 = 19000106 = Sat
\nDay 6 = 19000107 = Sun
\n
\nSo, converting INT value 0 to DATETIME gives 19000101.
\nIf you want to find the next Wednesday then you should start from day 2 (19000103/Wed), compute the days between day 2 and the current day (20130921; 41534 days), divide by 7 (in order to get the number of full weeks; 5933 weeks), multiply by 7 (41531 days; in order to get the number of days in those full weeks between the first Wednesday/19000103 and the last Wednesday) and then add 7 days (one week; 41538 days; in order to get the following Wednesday). Add this number (41538 days) to the starting date: 19000103.
\nNote: my current date is 20130921.
\nEdit #1:
\nDECLARE @NextDayID INT;\nSET @NextDayID = 1; -- Next Sunday\nSELECT DATEADD(DAY, (DATEDIFF(DAY, ((@NextDayID + 5) % 7), GETDATE()) / 7) * 7 + 7, ((@NextDayID + 5) % 7)) AS NextDay\n
\nResult:
\nNextDay\n-----------------------\n2013-09-29 00:00:00.000 \n
\nNote: my current date is 20130923.
\n
soup wrap:
1) Your solution uses a non-deterministic function: datepart(dw, ...). Because of this, changing the DATEFIRST setting will give different results. For example, you should try:
SET DATEFIRST 7;
your solution;
and then
SET DATEFIRST 1;
your solution;
2) Following solution is independent of DATEFIRST/LANGUAGE settings:
DECLARE @NextDayID INT = 0 -- 0=Mon, 1=Tue, 2 = Wed, ..., 5=Sat, 6=Sun
SELECT DATEADD(DAY, (DATEDIFF(DAY, @NextDayID, GETDATE()) / 7) * 7 + 7, @NextDayID) AS NextDay
Result:
NextDay
-----------------------
2013-09-23 00:00:00.000
This solution is based on following property of DATETIME type:
Day 0 = 19000101 = Mon
Day 1 = 19000102 = Tue
Day 2 = 19000103 = Wed
...
Day 5 = 19000106 = Sat
Day 6 = 19000107 = Sun
So, converting INT value 0 to DATETIME gives 19000101.
If you want to find the next Wednesday then you should start from day 2 (19000103/Wed), compute the days between day 2 and the current day (20130921; 41534 days), divide by 7 (in order to get the number of full weeks; 5933 weeks), multiply by 7 (41531 days; in order to get the number of days in those full weeks between the first Wednesday/19000103 and the last Wednesday) and then add 7 days (one week; 41538 days; in order to get the following Wednesday). Add this number (41538 days) to the starting date: 19000103.
Note: my current date is 20130921.
Edit #1:
DECLARE @NextDayID INT;
SET @NextDayID = 1; -- Next Sunday
SELECT DATEADD(DAY, (DATEDIFF(DAY, ((@NextDayID + 5) % 7), GETDATE()) / 7) * 7 + 7, ((@NextDayID + 5) % 7)) AS NextDay
Result:
NextDay
-----------------------
2013-09-29 00:00:00.000
Note: my current date is 20130923.
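The same "full weeks plus one" modular arithmetic in Python, using the same Monday-is-0 convention (Python's datetime.weekday() already numbers Monday as 0, matching day 0 = 19000101 = Mon):

```python
from datetime import date, timedelta

def next_weekday(today, target):
    """Next strictly-future occurrence of target (0=Mon ... 6=Sun),
    mirroring the DATEADD/DATEDIFF query's week arithmetic."""
    return today + timedelta(days=(target - today.weekday() - 1) % 7 + 1)
```

With the answer's dates: next_weekday(date(2013, 9, 21), 0) gives 2013-09-23 (the next Monday), and next_weekday(date(2013, 9, 23), 6) gives 2013-09-29 (the next Sunday), matching both result sets above.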
qid & accept id:
(18922620, 18924755)
query:
MySQL: SELECT Row Based on Ratio of True to False in Second Table
soup:
Try this
\n select r.mediaid, \n count(*) as total_rows, \n sum(rating) as id_sum,\n SUM(rating)/count(*) AS score\n from rating r, media m\n where r.mediaid=m.mediaid\n group by r.mediaid\n
\nIf you want to report only those records with a score above a threshold such as 0.75\nthen add the 'having' clause
\n select r.mediaid, \n count(*) as total_rows, \n sum(rating) as id_sum,\n SUM(rating)/count(*) AS score\n from rating r, media m\n where r.mediaid=m.mediaid\n group by r.mediaid\n having score > .75 \n
\nHere's a demo SQL Fiddle
\nAfter Comment
\nOne way is to sort by scores desc and then limit to 1 record like this SQL Fiddle#2
\n select r.mediaid, \n count(*) as total_rows, \n sum(rating) as id_sum,\n SUM(rating)/count(*) AS score\nfrom rating r, media m\n where r.mediaid=m.mediaid\n group by r.mediaid\norder by score desc limit 1\n
\n
soup wrap:
Try this
select r.mediaid,
count(*) as total_rows,
sum(rating) as id_sum,
SUM(rating)/count(*) AS score
from rating r, media m
where r.mediaid=m.mediaid
group by r.mediaid
If you want to report only those records with a score above a threshold such as 0.75
then add the 'having' clause
select r.mediaid,
count(*) as total_rows,
sum(rating) as id_sum,
SUM(rating)/count(*) AS score
from rating r, media m
where r.mediaid=m.mediaid
group by r.mediaid
having score > .75
Here's a demo SQL Fiddle
After Comment
One way is to sort by scores desc and then limit to 1 record like this SQL Fiddle#2
select r.mediaid,
count(*) as total_rows,
sum(rating) as id_sum,
SUM(rating)/count(*) AS score
from rating r, media m
where r.mediaid=m.mediaid
group by r.mediaid
order by score desc limit 1
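The score column is simply the mean of a 0/1 rating per mediaid. Computing it and taking the top item, sketched in Python (ratings as (mediaid, rating) pairs; names are illustrative):

```python
from collections import defaultdict

def top_by_score(ratings):
    """sum(rating)/count(*) per mediaid, then the equivalent of
    ORDER BY score DESC LIMIT 1."""
    totals = defaultdict(lambda: [0, 0])   # mediaid -> [id_sum, total_rows]
    for mediaid, rating in ratings:
        totals[mediaid][0] += rating
        totals[mediaid][1] += 1
    return max(totals, key=lambda m: totals[m][0] / totals[m][1])
```

Note that, like the SQL with LIMIT 1, ties between equal scores are broken arbitrarily.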
qid & accept id:
(18992088, 18992216)
query:
Order 2 tables by column names
soup wrap:
You can use this to help build the query:
SELECT ',' + name
FROM sys.columns
WHERE object_id IN (OBJECT_ID('Table1'),OBJECT_ID('Table2'))
ORDER BY name
Update: Dynamic SQL version (still have to plop table names in manually):
DECLARE @sql VARCHAR(MAX)
,@cols VARCHAR(MAX)
SET @cols = (SELECT STUFF((SELECT ',' + Name
FROM (SELECT DISTINCT Name
FROM sys.columns
WHERE object_id IN (OBJECT_ID('Table1'),OBJECT_ID('Table2'))
AND Name <> 'ID'
)sub
ORDER BY name
FOR XML PATH('')
), 1, 1, '' ))
SET @sql = 'SELECT ' +@cols+'
FROM Table1 a
JOIN Table2 b
ON a.ID = b.ID
'
EXEC (@sql)
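The same build-the-column-list-from-catalog-metadata idea can be sketched in Python against sqlite3, where `PRAGMA table_info` plays the role of `sys.columns` (table and column names here are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table1 (ID INTEGER, beta TEXT, alpha TEXT);
CREATE TABLE Table2 (ID INTEGER, delta TEXT, gamma TEXT);
INSERT INTO Table1 VALUES (1, 'b', 'a');
INSERT INTO Table2 VALUES (1, 'd', 'g');
""")

# Collect column names from both tables, drop the join key,
# de-duplicate and sort -- then assemble the dynamic SELECT.
cols = sorted({row[1]                      # row[1] is the column name
               for t in ("Table1", "Table2")
               for row in conn.execute(f"PRAGMA table_info({t})")
               if row[1] != "ID"})
sql = "SELECT " + ", ".join(cols) + " FROM Table1 a JOIN Table2 b ON a.ID = b.ID"
print(sql)
print(conn.execute(sql).fetchone())  # ('a', 'b', 'd', 'g')
```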
qid & accept id:
(19005246, 19006246)
query:
tracking customer retension on weekly basis
soup wrap:
I see two ways to do it.
I would go for an array approach, since it will probably be the fastest (a single data step) and is not that complex:
data RESULT (drop=start_week end_week);
set YOUR_DATA;
array week_array{62} week0-week61;
do week=0 to 61;
if week between start_week and end_week then week_array[week+1]=1;
else week_array[week+1]=0;
end;
run;
Alternatively, you can prepare a table for the transpose to work on by creating one record per week per id:
data BEFORE_TRANSPOSE (drop=start_week end_week);
set YOUR_DATA;
do week=0 to 61;
if week between start_week and end_week then subscribed=1;
else subscribed=0;
output;
end;
run;
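The SAS array step above expands each (start_week, end_week) pair into 62 weekly 0/1 flags; a pure-Python sketch of the same expansion (field names are made up to match the example):

```python
# One flag per week: 1 while the id was subscribed, 0 otherwise,
# mirroring the week0-week61 array in the data step.
def week_flags(start_week, end_week, n_weeks=62):
    return [1 if start_week <= w <= end_week else 0 for w in range(n_weeks)]

row = {"id": 7, "start_week": 2, "end_week": 4}
flags = week_flags(row["start_week"], row["end_week"])
print(flags[:6])  # [0, 0, 1, 1, 1, 0]
```

The "long" BEFORE_TRANSPOSE variant corresponds to emitting one (id, week, flag) record per list element instead of one wide row.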
qid & accept id:
(19006430, 19007015)
query:
Converting a pivot table to a flat table in SQL
soup wrap:
In order to get the result, you will need to UNPIVOT the data. When you unpivot, you convert multiple columns into multiple rows; in doing so, the datatypes of the data must be the same.
I would use CROSS APPLY to unpivot the columns in pairs:
select t.employee_id,
t.employee_name,
c.data,
c.old,
c.new
from yourtable t
cross apply
(
values
('Address', Address_Old, Address_new),
('Income', cast(income_old as varchar(15)), cast(income_new as varchar(15)))
) c (data, old, new);
See SQL Fiddle with demo. As you can see, this uses a cast on the income columns because I am guessing they are a different datatype from the address. Since the final result will have these values in the same column, the data must be of the same type.
This can also be written using CROSS APPLY with UNION ALL:
select t.employee_id,
t.employee_name,
c.data,
c.old,
c.new
from yourtable t
cross apply
(
select 'Address', Address_Old, Address_new union all
select 'Income', cast(income_old as varchar(15)), cast(income_new as varchar(15))
) c (data, old, new)
See Demo
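SQLite has no CROSS APPLY, but the UNION ALL form of the unpivot works there too; a quick sketch with Python's sqlite3 and an invented one-row table (CAST keeps both value columns in a single text datatype, as noted above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE yourtable (
    employee_id INT, employee_name TEXT,
    address_old TEXT, address_new TEXT,
    income_old INT,  income_new INT);
INSERT INTO yourtable VALUES (1, 'Ann', 'Old St', 'New St', 100, 120);
""")

# Each SELECT contributes one attribute pair per source row,
# so one wide row becomes two narrow rows.
rows = conn.execute("""
    SELECT employee_id, employee_name, 'Address' AS data,
           address_old AS old, address_new AS new FROM yourtable
    UNION ALL
    SELECT employee_id, employee_name, 'Income',
           CAST(income_old AS TEXT), CAST(income_new AS TEXT) FROM yourtable
    ORDER BY data
""").fetchall()
for r in rows:
    print(r)
```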
qid & accept id:
(19041847, 19042537)
query:
Best way to display number of overspent projects
soup wrap:
You can do it like this
CREATE VIEW OverBudgetProjects AS
SELECT p.department, p.projectid
FROM project p LEFT JOIN assignment a
ON p.projectid = a.projectid
GROUP BY p.department, p.projectid
HAVING MAX(p.maxhours) < SUM(a.hoursworked);
CREATE VIEW Projects AS
SELECT DepartmentName,
COUNT(DISTINCT p.projectid) NumberOfProjects,
COUNT(DISTINCT o.Projectid) NumberOfOverBudgetProjects,
OfficeNumber,
Phone
FROM department d JOIN project p
ON d.DepartmentName = p.Department LEFT JOIN OverBudgetProjects o
ON d.DepartmentName = o.Department
GROUP BY p.Department;
Sample output from issuing
SELECT * FROM Projects
is
| DEPARTMENTNAME | NUMBEROFPROJECTS | NUMBEROFOVERBUDGETPROJECTS | OFFICENUMBER | PHONE |
|----------------|------------------|----------------------------|--------------|--------------|
| Accounting | 1 | 0 | BLDG01-100 | 360-285-8300 |
| Finance | 2 | 0 | BLDG01-140 | 360-285-8400 |
| Marketing | 2 | 2 | BLDG02-200 | 360-287-8700 |
Here is SQLFiddle demo
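The core of the view pair is the LEFT JOIN against the over-budget subset, so departments with no over-budget projects still appear with a zero count. A simplified sqlite3 sketch (names invented; the hours-based HAVING from the first view is collapsed into an `over_budget` flag for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE department (name TEXT PRIMARY KEY);
CREATE TABLE project (projectid INT, department TEXT, over_budget INT);
INSERT INTO department VALUES ('Accounting'), ('Marketing');
INSERT INTO project VALUES (1,'Accounting',0),(2,'Marketing',1),(3,'Marketing',1);
""")

# COUNT(DISTINCT ...) guards against the row multiplication the
# project-times-overbudget join produces, and ignores the NULLs the
# LEFT JOIN emits for departments with nothing over budget.
rows = conn.execute("""
    SELECT d.name,
           COUNT(DISTINCT p.projectid) AS projects,
           COUNT(DISTINCT o.projectid) AS over_budget_projects
    FROM department d
    JOIN project p ON d.name = p.department
    LEFT JOIN (SELECT projectid, department FROM project WHERE over_budget = 1) o
      ON d.name = o.department
    GROUP BY d.name
    ORDER BY d.name
""").fetchall()
print(rows)  # [('Accounting', 1, 0), ('Marketing', 2, 2)]
```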
qid & accept id:
(19053225, 19055971)
query:
Count the number of occurrences grouped by some rows
soup wrap:
Since you seem to want every row in the result individually, you cannot aggregate. Use a window function instead to get the count per day. The well-known aggregate function count() can also serve as a window aggregate function:
SELECT current_date - ped.data_envio::date AS days_out_of_stock
,count(*) OVER (PARTITION BY ped.data_envio::date)
AS count_per_days_out_of_stock
,ped.data_envio::date AS date
,p.id AS product_id
,opl.id AS storage_id
FROM sub_produtos_pedidos spp
LEFT JOIN cad_produtos p ON p.cod_ean = spp.ean_produto
LEFT JOIN sub_pedidos sp ON sp.id = spp.id_pedido
LEFT JOIN op_logisticos opl ON opl.id = sp.id_op_logistico
LEFT JOIN pedidos ped ON ped.id = sp.id_pedido
WHERE spp.motivo = '201' -- code for 'not in inventory'
ORDER BY ped.data_envio::date, p.id, opl.id
Sort order: Products having been out of stock for the longest time first.
Note, you can just subtract dates to get an integer in Postgres.
If you want a running count in the sense of "n rows have been out of stock for this number of days or more", use:
count(*) OVER (ORDER BY ped.data_envio::date) -- ascending order!
AS running_count_per_days_out_of_stock
You get the same count for the same day, peers are lumped together.
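The count()-as-window-function idea can be tried with Python's sqlite3 as well (SQLite supports window functions from version 3.25; the table below is invented):

```python
import sqlite3  # requires SQLite >= 3.25 for window functions

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (day TEXT, product INT);
INSERT INTO t VALUES ('2013-10-01', 1), ('2013-10-01', 2), ('2013-10-02', 3);
""")

# Every row keeps its identity, but carries the per-day count alongside it --
# no GROUP BY collapsing the rows.
rows = conn.execute("""
    SELECT day, product,
           COUNT(*) OVER (PARTITION BY day) AS count_per_day
    FROM t
    ORDER BY day, product
""").fetchall()
print(rows)  # [('2013-10-01', 1, 2), ('2013-10-01', 2, 2), ('2013-10-02', 3, 1)]
```

Replacing `PARTITION BY day` with `ORDER BY day` gives the running count described above, with peers (same day) lumped together.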
qid & accept id:
(19068044, 19068152)
query:
Select from list of values received from a subquery, possibly null
soup wrap:
Use EXISTS instead of IN: EXISTS is clearer (IMHO), and in most cases it is faster, too (IN (...) needs to remove/suppress duplicates and NULLs, and thus sort the set).
In this particular case, the aggregating subquery is only needed to find out that the group count() > 1. The query optimiser may not realise this and may calculate the complete group counts (over the complete set of rows) before comparing them to 1.
SELECT tt.id
FROM thetable tt
WHERE EXISTS (
SELECT * FROM thetable ex
WHERE ex.column1 = tt.column1 AND ex.id <> tt.id
);
WRT the suppression of NULLs: the WHERE ex.column1 = tt.column1 clause will always yield false if either ex.column1 or tt.column1 (or both) happen to be NULL.
UPDATE. It appears that the OP also wants the tuples with column1 IS NULL, if there are more of them. The simple solution is to use a sentinel value (a value that is not natively present in column1) as a surrogate (in the fragment below, -1 is used as the surrogate value):
SELECT tt.id
FROM thetable tt
WHERE EXISTS (
SELECT * FROM thetable ex
WHERE COALESCE(ex.column1, -1) = COALESCE(tt.column1, -1)
AND ex.id <> tt.id
);
The other (obvious) way would be to explicitly check for NULLs, but this requires an OR clause and a bunch of parentheses, like:
SELECT tt.id
FROM thetable tt
WHERE EXISTS (
SELECT * FROM thetable ex
WHERE (ex.column1 = tt.column1
OR (ex.column1 IS NULL AND tt.column1 IS NULL)
)
AND ex.id <> tt.id
);
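The COALESCE variant behaves as described; here is a quick sqlite3 check with invented rows, where the two NULLs should count as duplicates of each other:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE thetable (id INTEGER PRIMARY KEY, column1 INT);
INSERT INTO thetable VALUES (1, 5), (2, 5), (3, NULL), (4, NULL), (5, 9);
""")

# COALESCE maps NULL to the sentinel -1 (assumed absent from the real data),
# so NULL rows also pair up as duplicates; id 5 has a unique value and drops out.
ids = [r[0] for r in conn.execute("""
    SELECT tt.id FROM thetable tt
    WHERE EXISTS (
        SELECT * FROM thetable ex
        WHERE COALESCE(ex.column1, -1) = COALESCE(tt.column1, -1)
          AND ex.id <> tt.id)
    ORDER BY tt.id
""")]
print(ids)  # [1, 2, 3, 4]
```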
qid & accept id:
(19073500, 19073575)
query:
SQL split comma separated row
soup wrap:
You can do it with pure SQL like this
SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(t.values, ',', n.n), ',', -1) value
FROM table1 t CROSS JOIN
(
SELECT a.N + b.N * 10 + 1 n
FROM
(SELECT 0 AS N UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) a
,(SELECT 0 AS N UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) b
ORDER BY n
) n
WHERE n.n <= 1 + (LENGTH(t.values) - LENGTH(REPLACE(t.values, ',', '')))
ORDER BY value
Note: The trick is to leverage a tally (numbers) table and the MySQL function SUBSTRING_INDEX(), which is very handy in this case. If you do a lot of such queries (splitting), then you might consider populating and using a persisted tally table instead of generating it on the fly with a subquery as in this example. The subquery in this example generates a sequence of numbers from 1 to 100, effectively allowing you to split up to 100 delimited values per row in the source table. If you need more or fewer, you can easily adjust it.
Output:
| VALUE |
|----------------|
| somethingA |
| somethingB |
| somethingC |
| somethingElseA |
| somethingElseB |
Here is SQLFiddle demo
This is how the query might look with a persisted tally table
SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(t.values, ',', n.n), ',', -1) value
FROM table1 t CROSS JOIN tally n
WHERE n.n <= 1 + (LENGTH(t.values) - LENGTH(REPLACE(t.values, ',', '')))
ORDER BY value
Here is SQLFiddle demo
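To see why the nested call SUBSTRING_INDEX(SUBSTRING_INDEX(s, ',', n), ',', -1) yields the n-th item, here is a pure-Python model of MySQL's SUBSTRING_INDEX semantics (positive count keeps everything before the count-th delimiter, negative count everything after it, counting from the right):

```python
def substring_index(s, delim, count):
    # Models MySQL SUBSTRING_INDEX: positive count keeps the left part,
    # negative count keeps the right part.
    parts = s.split(delim)
    if count > 0:
        return delim.join(parts[:count])
    return delim.join(parts[count:])

s = "somethingA,somethingB,somethingC"
# n = 2: the outer call keeps the first two items,
# the inner -1 then keeps the last of those -- i.e. item number 2.
print(substring_index(substring_index(s, ",", 2), ",", -1))  # somethingB
```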
qid & accept id:
(19101688, 19103866)
query:
SQL: 2 same vowels regex
soup wrap:
This isn't pretty or short but it is simple.
SELECT word
FROM tabl
WHERE
-- assuming case sensitive based on your example
(word LIKE '%[Aa]%[Aa]%' AND word NOT LIKE '%[Aa]%[Aa]%[Aa]%')
OR
(word LIKE '%[Ee]%[Ee]%' AND word NOT LIKE '%[Ee]%[Ee]%[Ee]%')
OR
(word LIKE '%[Ii]%[Ii]%' AND word NOT LIKE '%[Ii]%[Ii]%[Ii]%')
OR
(word LIKE '%[Oo]%[Oo]%' AND word NOT LIKE '%[Oo]%[Oo]%[Oo]%')
OR
(word LIKE '%[Uu]%[Uu]%' AND word NOT LIKE '%[Uu]%[Uu]%[Uu]%')
It occurs to me that you didn't specify what to do for a word that has two of one vowel and three of another. Does that qualify? If not (say "Alaska StatE PEak Park" would be bad even though it has exactly two E's in it), then you might want this instead:
SELECT word
FROM tabl
WHERE
-- assuming case sensitive based on your example
( word LIKE '%[Aa]%[Aa]%'
OR word LIKE '%[Ee]%[Ee]%'
OR word LIKE '%[Ii]%[Ii]%'
OR word LIKE '%[Oo]%[Oo]%'
OR word LIKE '%[Uu]%[Uu]%'
)
AND word NOT LIKE '%[Aa]%[Aa]%[Aa]%'
AND word NOT LIKE '%[Ee]%[Ee]%[Ee]%'
AND word NOT LIKE '%[Ii]%[Ii]%[Ii]%'
AND word NOT LIKE '%[Oo]%[Oo]%[Oo]%'
AND word NOT LIKE '%[Uu]%[Uu]%[Uu]%'
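The two LIKE-pattern variants translate directly into counting: `'%v%v%'` without `'%v%v%v%'` means "this vowel appears exactly twice". A small Python sketch of both interpretations:

```python
# First query: some vowel appears exactly twice (other vowels unconstrained).
def has_exactly_two_of_some_vowel(word):
    w = word.lower()
    return any(w.count(v) == 2 for v in "aeiou")

# Second query: some vowel appears twice AND no vowel appears three or more times.
def strict_version(word):
    counts = [word.lower().count(v) for v in "aeiou"]
    return 2 in counts and max(counts) < 3

print(has_exactly_two_of_some_vowel("peek"))  # True  (two e's)
print(strict_version("banana"))               # False (three a's)
```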
qid & accept id:
(19136921, 19144070)
query:
How to count all posts belonging to multiple tags in NHibernate?
soup wrap:
I found a way to get this result without a subquery, and it works with NHibernate LINQ. It was actually not that easy because of the limited subset of LINQ expressions supported by NHibernate... but anyway:
query:
var searchTags = new[] { "C#", "C++" };
var result = session.Query<Post>()
.Select(p => new {
Id = p.Id,
Count = p.Tags.Where(t => searchTags.Contains(t.Title)).Count()
})
.Where(s => s.Count >= 2)
.Count();
It produces the following SQL statement:
select cast(count(*) as INT) as col_0_0_
from Posts post0_
where (
select cast(count(*) as INT)
from PostsToTags tags1_, Tags tag2_
where post0_.Id=tags1_.Post_id
and tags1_.Tag_id=tag2_.Id
and (tag2_.Title='C#' or tag2_.Title='C++'))>=2
you should be able to build your user restriction into this, I hope.
The following is my test setup and random data which got generated
public class Post
{
public Post()
{
Tags = new List<Tag>();
}
public virtual void AddTag(Tag tag)
{
this.Tags.Add(tag);
tag.Posts.Add(this);
}
public virtual string Title { get; set; }
public virtual string Content { get; set; }
public virtual ICollection<Tag> Tags { get; set; }
public virtual int Id { get; set; }
}
public class PostMap : ClassMap<Post>
{
public PostMap()
{
Table("Posts");
Id(p => p.Id).GeneratedBy.Native();
Map(p => p.Content);
Map(p => p.Title);
HasManyToMany(map => map.Tags).Cascade.All();
}
}
public class Tag
{
public Tag()
{
Posts = new List<Post>();
}
public virtual string Title { get; set; }
public virtual string Description { get; set; }
public virtual ICollection<Post> Posts { get; set; }
public virtual int Id { get; set; }
}
public class TagMap : ClassMap<Tag>
{
public TagMap()
{
Table("Tags");
Id(p => p.Id).GeneratedBy.Native();
Map(p => p.Description);
Map(p => p.Title);
HasManyToMany(map => map.Posts).LazyLoad().Inverse();
}
}
test run:
var sessionFactory = Fluently.Configure()
.Database(FluentNHibernate.Cfg.Db.MsSqlConfiguration.MsSql2012
.ConnectionString(@"Server=.\SQLExpress;Database=TestDB;Trusted_Connection=True;")
.ShowSql)
.Mappings(m => m.FluentMappings
.AddFromAssemblyOf<PostMap>())
.ExposeConfiguration(cfg => new SchemaUpdate(cfg).Execute(false, true))
.BuildSessionFactory();
using (var session = sessionFactory.OpenSession())
{
var t1 = new Tag() { Title = "C#", Description = "C#" };
session.Save(t1);
var t2 = new Tag() { Title = "C++", Description = "C/C++" };
session.Save(t2);
var t3 = new Tag() { Title = ".Net", Description = "Net" };
session.Save(t3);
var t4 = new Tag() { Title = "Java", Description = "Java" };
session.Save(t4);
var t5 = new Tag() { Title = "lol", Description = "lol" };
session.Save(t5);
var t6 = new Tag() { Title = "rofl", Description = "rofl" };
session.Save(t6);
var tags = session.Query<Tag>().ToList();
var r = new Random();
for (int i = 0; i < 1000; i++)
{
var post = new Post()
{
Title = "Title" + i,
Content = "Something awesome" + i,
};
var manyTags = r.Next(1, 3);
while (post.Tags.Count() < manyTags)
{
var index = r.Next(0, 6);
if (!post.Tags.Contains(tags[index]))
{
post.AddTag(tags[index]);
}
}
session.Save(post);
}
session.Flush();
/* query test */
var searchTags = new[] { "C#", "C++" };
var result = session.Query<Post>()
.Select(p => new {
Id = p.Id,
Count = p.Tags.Where(t => searchTags.Contains(t.Title)).Count()
})
.Where(s => s.Count >= 2)
.Count();
var resultOriginal = session.CreateQuery(@"
SELECT COUNT(*)
FROM
(
SELECT count(Posts.Id)P FROM Posts
LEFT JOIN PostsToTags ON Posts.Id=PostsToTags.Post_id
LEFT JOIN Tags ON PostsToTags.Tag_id=Tags.Id
WHERE Tags.Title in ('c#', 'C++')
GROUP BY Posts.Id
HAVING COUNT(Posts.Id)>=2
)t
").List()[0];
var isEqual = result == (int)resultOriginal;
}
As you can see, at the end I test against your original query (without the users), and it yields the same count.
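Stripped of the ORM machinery, the generated subquery is a tags-per-post count. A minimal sqlite3 sketch of that core (junction-table name and rows invented; each tag appears at most once per post, as in the test data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE posts_to_tags (post_id INT, tag TEXT);
INSERT INTO posts_to_tags VALUES
    (1, 'C#'), (1, 'C++'), (2, 'C#'), (3, 'C++'), (3, 'C#');
""")

# Count the posts that carry at least 2 of the searched tags --
# the GROUP BY / HAVING form of the answer's correlated count.
n = conn.execute("""
    SELECT COUNT(*) FROM (
        SELECT post_id FROM posts_to_tags
        WHERE tag IN ('C#', 'C++')
        GROUP BY post_id
        HAVING COUNT(*) >= 2)
""").fetchone()[0]
print(n)  # 2
```

Posts 1 and 3 carry both tags; post 2 carries only one.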
qid & accept id:
(19155321, 19556418)
query:
MySQL paging large data based on a specific order
soup wrap:
Firstly, you need to create an index on the date field. This allows the rows to be retrieved in order without having to sort the entire table every time a request is made.
Secondly, paging based on an index gets slower the deeper you delve into the result set. To illustrate:
ORDER BY indexedcolumn LIMIT 0, 200 is very fast because it only has to scan 200 rows of the index.
ORDER BY indexedcolumn LIMIT 200, 200 is relatively fast, but requires scanning 400 rows of the index.
ORDER BY indexedcolumn LIMIT 660000, 200 is very slow because it requires scanning 660,200 rows of the index.
Note: even so, this may still be significantly faster than not having an index at all.
You can fix this in a few different ways.
Implement value-based paging, so you're paging based on the value of the last result on the previous page. For example:
WHERE indexedcolumn>[lastval] ORDER BY indexedcolumn LIMIT 200 replacing [lastval] with the value of the last result of the current page. The index allows random access to a particular value, and proceeding forward or backwards from that value.
Only allow users to view the first X rows (eg. 1000). This is no good if the value they want is the 2529th value.
Think of some logical way of breaking up your large table, for example by first letter or by year, so users never encounter the entire result set of millions of rows; instead they drill down into a specific subset first, which will be smaller and quicker to sort.
If you're combining a WHERE and an ORDER BY you'll need to reflect this in the design of your index to enable MySQL to continue to benefit from the index for sorting. For example if your query is:
SELECT * FROM mytable WHERE year='2012' ORDER BY date LIMIT 0, 200
Then your index will need to be on two columns (year, date) in that order.
If your query is:
SELECT * FROM mytable WHERE firstletter='P' ORDER BY date LIMIT 0, 200
Then your index will need to be on the two columns (firstletter, date) in that order.
The idea is that an index on multiple columns allows sorting by any column as long as you specified previous columns to be constants (single values) in a condition. So an index on A, B, C, D and E allows sorting by C if you specify A and B to be constants in a WHERE condition. A and B cannot be ranges.
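The offset-versus-keyset contrast above can be demonstrated with a sqlite3 sketch (made-up table; both queries return the same page, but the keyset form seeks straight to the last value of the previous page instead of scanning and discarding rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY, val INT)")
conn.executemany("INSERT INTO mytable (val) VALUES (?)",
                 [(i,) for i in range(1000)])
conn.execute("CREATE INDEX ix_val ON mytable (val)")

PAGE = 200

# Offset paging: the engine walks (and discards) 400 index rows first.
page3_offset = conn.execute(
    "SELECT val FROM mytable ORDER BY val LIMIT ? OFFSET ?",
    (PAGE, 2 * PAGE)).fetchall()

# Value-based (keyset) paging: seek to lastval, then read 200 rows forward.
lastval = 2 * PAGE - 1  # last val on the previous page
page3_keyset = conn.execute(
    "SELECT val FROM mytable WHERE val > ? ORDER BY val LIMIT ?",
    (lastval, PAGE)).fetchall()

print(page3_offset == page3_keyset)  # True
```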
qid & accept id:
(19163959, 19164011)
query:
Yesterday's date in where clase with HH:MM:SS
soup wrap:
You could use
TRUNC(TableT.STARTDATETIME) = TRUNC(sysdate-1)
to truncate both dates to the day on both sides of the comparison. However, for this to be efficient, you'd need a function-based index on TRUNC(TableT.STARTDATETIME).
Maybe better in general from a performance aspect:
TableT.STARTDATETIME >= trunc(sysdate-1) AND TableT.STARTDATETIME < trunc(sysdate);
This includes yesterday 00:00:00 (the >= ), but excludes today 00:00:00 (the <).
Warning! Keep in mind that for TIMESTAMP columns, while it is tempting because of its simplicity, you should not use 23:59:59 as the end time: the one-second slot between 23:59:59 and 00:00:00 might contain data too, and that gap would leave those rows out of processing.
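The half-open interval the answer recommends can be sketched with Python's datetime module (the computed bounds correspond to TRUNC(sysdate - 1) and TRUNC(sysdate)):

```python
from datetime import date, datetime, timedelta

# [yesterday 00:00:00, today 00:00:00): includes all of yesterday,
# excludes the first instant of today.
today = date.today()
start = datetime.combine(today - timedelta(days=1), datetime.min.time())
end = datetime.combine(today, datetime.min.time())

# A timestamp inside the final second of yesterday, e.g. 23:59:59.999999 --
# a BETWEEN ... '23:59:59' check would miss it, the half-open range does not.
ts = end - timedelta(microseconds=1)
print(start <= ts < end)  # True
```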
qid & accept id:
(19181164, 19293511)
query:
how to change font in mysql database to store unicode charactors
soup wrap:
We assume we have a DB with a table articles and a column named posts, which will store the articles written in your blog. Best part: we know that all major DBs support UTF8, and we shall explore that feature.
Now we write an article in Hindi: हेल्लो वर्ल्ड
If UTF8 is not specified, you will see something like ?????? in your DB; otherwise you will see the Hindi text.
Code:
First check for UTF8 compatibility with this query. If it is supported, you should see output like:
Character_set_system | UTF8
SHOW VARIABLES LIKE 'character_set_system';
With that checked, alter the table to modify the column (Posts in our example above) and specify it as UTF8:
ALTER TABLE articles MODIFY Posts VARCHAR(20) CHARACTER SET UTF8;
Now, try to insert the Hindi value and save it. Query it and you should see the Hindi text.
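For contrast, SQLite (used here only to illustrate the round trip) stores TEXT as Unicode by default, so no column-level charset declaration is needed. The table and column names below just mirror the example:

```python
import sqlite3

# In-memory database; SQLite TEXT is Unicode, so Hindi round-trips as-is.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE articles (posts TEXT)")
db.execute("INSERT INTO articles VALUES (?)", ("हेल्लो वर्ल्ड",))
stored, = db.execute("SELECT posts FROM articles").fetchone()
```

In MySQL the equivalent guarantee comes from the column's character set, which is what the ALTER TABLE above establishes.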
qid & accept id:
(19189050, 19189434)
query:
T-SQL Query to Select current, previous or next week
soup:
DECLARE @CurrentDate SMALLDATETIME; -- Or DATE\n\nSET @CurrentDate = '20131004'\n\nSELECT DATEADD(DAY, (DATEDIFF(DAY, 0, @CurrentDate) / 7) * 7, 0) AS FirstDayOfTheWeek,\n DATEADD(DAY, (DATEDIFF(DAY, 0, @CurrentDate) / 7) * 7 + 4, 0) AS LastDayOfTheWeek\n
\nResults:
\nFirstDayOfTheWeek LastDayOfTheWeek\n----------------------- -----------------------\n2013-09-30 00:00:00.000 2013-10-04 00:00:00.000\n
\nAll days between Monday and Friday:
\nDECLARE @CurrentDate DATE;\nDECLARE @WeekNum SMALLINT;\n\nSET @CurrentDate = '20131004'\nSET @WeekNum = +1; -- -1 Previous WK, 0 Current WK, +1 Next WK\n\nSELECT DATEADD(DAY, dof.DayNum, fdow.FirstDayOfTheWeek) AS DayAsDateTime\nFROM (VALUES (DATEADD(DAY, (DATEDIFF(DAY, 0, @CurrentDate) / 7) * 7 + @WeekNum*7, 0))) fdow(FirstDayOfTheWeek)\nCROSS JOIN (VALUES (0), (1), (2), (3), (4)) dof(DayNum)\n\n/*\nDayAsDateTime\n-----------------------\n2013-10-07 00:00:00.000\n2013-10-08 00:00:00.000\n2013-10-09 00:00:00.000\n2013-10-10 00:00:00.000\n2013-10-11 00:00:00.000\n*/\n\nSELECT *\nFROM\n(\nSELECT DATEADD(DAY, dof.DayNum, fdow.FirstDayOfTheWeek) AS DayAsDateTime, dof.DayNum\nFROM (VALUES (DATEADD(DAY, (DATEDIFF(DAY, 0, @CurrentDate) / 7) * 7 + @WeekNum*7, 0))) fdow(FirstDayOfTheWeek)\nCROSS JOIN (VALUES (0), (1), (2), (3), (4)) dof(DayNum)\n) src \nPIVOT( MAX(DayAsDateTime) FOR DayNum IN ([0], [1], [2], [3], [4]) ) pvt\n\n/*\n0 1 2 3 4\n----------------------- ----------------------- ----------------------- ----------------------- -----------------------\n2013-10-07 00:00:00.000 2013-10-08 00:00:00.000 2013-10-09 00:00:00.000 2013-10-10 00:00:00.000 2013-10-11 00:00:00.000\n*/\n
\n
soup wrap:
DECLARE @CurrentDate SMALLDATETIME; -- Or DATE
SET @CurrentDate = '20131004'
SELECT DATEADD(DAY, (DATEDIFF(DAY, 0, @CurrentDate) / 7) * 7, 0) AS FirstDayOfTheWeek,
DATEADD(DAY, (DATEDIFF(DAY, 0, @CurrentDate) / 7) * 7 + 4, 0) AS LastDayOfTheWeek
Results:
FirstDayOfTheWeek LastDayOfTheWeek
----------------------- -----------------------
2013-09-30 00:00:00.000 2013-10-04 00:00:00.000
All days between Monday and Friday:
DECLARE @CurrentDate DATE;
DECLARE @WeekNum SMALLINT;
SET @CurrentDate = '20131004'
SET @WeekNum = +1; -- -1 Previous WK, 0 Current WK, +1 Next WK
SELECT DATEADD(DAY, dof.DayNum, fdow.FirstDayOfTheWeek) AS DayAsDateTime
FROM (VALUES (DATEADD(DAY, (DATEDIFF(DAY, 0, @CurrentDate) / 7) * 7 + @WeekNum*7, 0))) fdow(FirstDayOfTheWeek)
CROSS JOIN (VALUES (0), (1), (2), (3), (4)) dof(DayNum)
/*
DayAsDateTime
-----------------------
2013-10-07 00:00:00.000
2013-10-08 00:00:00.000
2013-10-09 00:00:00.000
2013-10-10 00:00:00.000
2013-10-11 00:00:00.000
*/
SELECT *
FROM
(
SELECT DATEADD(DAY, dof.DayNum, fdow.FirstDayOfTheWeek) AS DayAsDateTime, dof.DayNum
FROM (VALUES (DATEADD(DAY, (DATEDIFF(DAY, 0, @CurrentDate) / 7) * 7 + @WeekNum*7, 0))) fdow(FirstDayOfTheWeek)
CROSS JOIN (VALUES (0), (1), (2), (3), (4)) dof(DayNum)
) src
PIVOT( MAX(DayAsDateTime) FOR DayNum IN ([0], [1], [2], [3], [4]) ) pvt
/*
0 1 2 3 4
----------------------- ----------------------- ----------------------- ----------------------- -----------------------
2013-10-07 00:00:00.000 2013-10-08 00:00:00.000 2013-10-09 00:00:00.000 2013-10-10 00:00:00.000 2013-10-11 00:00:00.000
*/
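The DATEDIFF(DAY, 0, @d) / 7 * 7 trick works because SQL Server's day 0 (1900-01-01) is a Monday, so integer division by 7 snaps to the start of the week. A Python sketch of the same arithmetic:

```python
from datetime import date, timedelta

EPOCH = date(1900, 1, 1)  # SQL Server's day 0, a Monday

def week_bounds(d, week_offset=0):
    """Monday and Friday of the week containing d, shifted by week_offset
    weeks (-1 previous, 0 current, +1 next), via integer division by 7."""
    days = (d - EPOCH).days
    monday = EPOCH + timedelta(days=(days // 7) * 7 + week_offset * 7)
    return monday, monday + timedelta(days=4)
```

With the article's sample date this reproduces the result sets shown above.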
qid & accept id:
(19211707, 19212599)
query:
How to query specific category or all categories inside the same query?
soup:
Building on what @JamesMarks was offering, it would be simpler to use a query like
\n$query = "SELECT * FROM table WHERE category = ? OR 1 = ?;"\n
\nThen pass your $category for the first parameter, and either 1 or 0 as the second parameter. If you pass 1, then the second term becomes 1 = 1. That's always true, so the whole expression is always true. If you pass 0, then the second term is 1 = 0 and that's always false, but then the whole expression will be true only if category = $category matches.
\nThat's simpler and better style than designating a special value 0 for "any category."
\nAn alternative solution is to build the query dynamically:
\n$where = array();\nif ($category) {\n $where[] = "category = ?";\n $params[] = $category;\n}\n\n... perhaps add more terms to $where conditionally ...\n\n$query = "SELECT * FROM table";\nif ($where) {\n $query .= " WHERE " . implode(" AND ", $where);\n}\n
\n
soup wrap:
Building on what @JamesMarks was offering, it would be simpler to use a query like
$query = "SELECT * FROM table WHERE category = ? OR 1 = ?;"
Then pass your $category for the first parameter, and either 1 or 0 as the second parameter. If you pass 1, then the second term becomes 1 = 1. That's always true, so the whole expression is always true. If you pass 0, then the second term is 1 = 0 and that's always false, but then the whole expression will be true only if category = $category matches.
That's simpler and better style than designating a special value 0 for "any category."
An alternative solution is to build the query dynamically:
$where = array();
if ($category) {
$where[] = "category = ?";
$params[] = $category;
}
... perhaps add more terms to $where conditionally ...
$query = "SELECT * FROM table";
if ($where) {
$query .= " WHERE " . implode(" AND ", $where);
}
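The same dynamic-WHERE pattern sketched in Python; the table name is hypothetical, and each condition is collected alongside its parameter so the query stays parameterized:

```python
def build_query(category=None, extra_conditions=()):
    """Assemble a parameterized query from optional filters.
    extra_conditions: iterable of (condition_sql, value) pairs.
    Returns (sql, params)."""
    where, params = [], []
    if category is not None:
        where.append("category = ?")
        params.append(category)
    for cond, value in extra_conditions:
        where.append(cond)
        params.append(value)
    sql = "SELECT * FROM table_name"  # hypothetical table name
    if where:
        sql += " WHERE " + " AND ".join(where)
    return sql, params
```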
qid & accept id:
(19256123, 19256721)
query:
In SQL query to find duplicates in one column then use a second column to determine which record to return
soup:
I think that should do the job on a DB2 as well:
\nSELECT Column1, Column2, \n MAX (CASE Column3 WHEN 2 THEN 2 ELSE NULL END)\n FROM t\n GROUP BY Column1, Column2;\n
\nSee this Fiddle for an ORACLE database.
\nResult:
\nCOLUMN1 COLUMN2 COLUMN3\n--------- ----------- -------\n134024323 81999000004 (null)\n127001126 90489495251 2\n346122930 346000016 2\n346207637 346000016 (null)\n
\n
soup wrap:
I think that should do the job on a DB2 as well:
SELECT Column1, Column2,
MAX (CASE Column3 WHEN 2 THEN 2 ELSE NULL END)
FROM t
GROUP BY Column1, Column2;
See this Fiddle for an ORACLE database.
Result:
COLUMN1 COLUMN2 COLUMN3
--------- ----------- -------
134024323 81999000004 (null)
127001126 90489495251 2
346122930 346000016 2
346207637 346000016 (null)
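A quick way to see the MAX(CASE ...) grouping in action is an in-memory SQLite database (the duplicate row added below is hypothetical, to give MAX something to collapse):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (Column1 INTEGER, Column2 INTEGER, Column3 INTEGER);
    INSERT INTO t VALUES
        (134024323, 81999000004, NULL),
        (127001126, 90489495251, NULL),
        (127001126, 90489495251, 2),   -- duplicate key; this row should win
        (346122930, 346000016, 2),
        (346207637, 346000016, NULL);
""")
rows = conn.execute("""
    SELECT Column1, Column2,
           MAX(CASE Column3 WHEN 2 THEN 2 ELSE NULL END) AS Column3
    FROM t
    GROUP BY Column1, Column2
    ORDER BY Column1
""").fetchall()
```

MAX ignores NULLs, so a group collapses to 2 if any of its rows has Column3 = 2, and to NULL otherwise.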
qid & accept id:
(19268811, 19268839)
query:
Set default value in query when value is null
soup:
Use the following:
\nSELECT RegName,\n RegEmail,\n RegPhone,\n RegOrg,\n RegCountry,\n DateReg,\n ISNULL(Website,'no website') AS WebSite \nFROM RegTakePart \nWHERE Reject IS NULL\n
\nor as, @Lieven noted:
\nSELECT RegName,\n RegEmail,\n RegPhone,\n RegOrg,\n RegCountry,\n DateReg,\n COALESCE(Website,'no website') AS WebSite \nFROM RegTakePart \nWHERE Reject IS NULL\n
\nThe dynamic of COALESCE is that you may define more arguments, so if the first is null then get the second, if the second is null get the third etc etc...
\n
soup wrap:
Use the following:
SELECT RegName,
RegEmail,
RegPhone,
RegOrg,
RegCountry,
DateReg,
ISNULL(Website,'no website') AS WebSite
FROM RegTakePart
WHERE Reject IS NULL
or, as @Lieven noted:
SELECT RegName,
RegEmail,
RegPhone,
RegOrg,
RegCountry,
DateReg,
COALESCE(Website,'no website') AS WebSite
FROM RegTakePart
WHERE Reject IS NULL
The advantage of COALESCE is that it accepts more than two arguments: if the first is NULL it returns the second, if the second is NULL the third, and so on.
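COALESCE's first-non-null behavior is easy to mirror in Python, which makes the multi-argument semantics concrete:

```python
def coalesce(*args):
    """Return the first argument that is not None, like SQL COALESCE;
    None if every argument is None."""
    for a in args:
        if a is not None:
            return a
    return None
```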
qid & accept id:
(19270316, 19276815)
query:
Count sequential matching words in two strings oracle
soup:
Personally, in this situation, I would choose PL/SQL code over plain SQL. Something like:
\nPackage specification:
\ncreate or replace package PKG is\n function NumOfSeqWords(\n p_str1 in varchar2,\n p_str2 in varchar2\n ) return number;\nend;\n
\nPackage body:
\ncreate or replace package body PKG is\n function NumOfSeqWords(\n p_str1 in varchar2,\n p_str2 in varchar2\n ) return number\n is\n l_str1 varchar2(4000) := p_str1;\n l_str2 varchar2(4000) := p_str2;\n l_res number default 0;\n l_del_pos1 number;\n l_del_pos2 number;\n l_word1 varchar2(1000);\n l_word2 varchar2(1000);\n begin\n loop\n l_del_pos1 := instr(l_str1, ' ');\n l_del_pos2 := instr(l_str2, ' ');\n case l_del_pos1\n when 0 \n then l_word1 := l_str1;\n l_str1 := ''; \n else l_word1 := substr(l_str1, 1, l_del_pos1 - 1);\n end case;\n case l_del_pos2\n when 0 \n then l_word2 := l_str2;\n l_str2 := ''; \n else l_word2 := substr(l_str2, 1, l_del_pos2 - 1);\n end case;\n exit when (l_word1 <> l_word2) or \n ((l_word1 is null) or (l_word2 is null));\n\n l_res := l_res + 1;\n l_str1 := substr(l_str1, l_del_pos1 + 1);\n l_str2 := substr(l_str2, l_del_pos2 + 1);\n end loop;\n return l_res;\n end;\nend;\n
\nTest case:
\n with t1(Id1, col1, col2) as(\n select 1, 'foo bar live' ,'foo bar' from dual union all\n select 2, 'foo live tele' ,'foo tele' from dual union all\n select 3, 'bar foo live' ,'foo bar live'from dual\n )\n select id1\n , col1\n , col2\n , pkg.NumOfSeqWords(col1, col2) as res\n from t1\n ;\n
\nResult:
\n ID1 COL1 COL2 RES\n---------- ------------- ------------ ----------\n 1 foo bar live foo bar 2\n 2 foo live tele foo tele 1\n 3 bar foo live foo bar live 0\n
\n
soup wrap:
Personally, in this situation, I would choose PL/SQL code over plain SQL. Something like:
Package specification:
create or replace package PKG is
function NumOfSeqWords(
p_str1 in varchar2,
p_str2 in varchar2
) return number;
end;
Package body:
create or replace package body PKG is
function NumOfSeqWords(
p_str1 in varchar2,
p_str2 in varchar2
) return number
is
l_str1 varchar2(4000) := p_str1;
l_str2 varchar2(4000) := p_str2;
l_res number default 0;
l_del_pos1 number;
l_del_pos2 number;
l_word1 varchar2(1000);
l_word2 varchar2(1000);
begin
loop
l_del_pos1 := instr(l_str1, ' ');
l_del_pos2 := instr(l_str2, ' ');
case l_del_pos1
when 0
then l_word1 := l_str1;
l_str1 := '';
else l_word1 := substr(l_str1, 1, l_del_pos1 - 1);
end case;
case l_del_pos2
when 0
then l_word2 := l_str2;
l_str2 := '';
else l_word2 := substr(l_str2, 1, l_del_pos2 - 1);
end case;
exit when (l_word1 <> l_word2) or
((l_word1 is null) or (l_word2 is null));
l_res := l_res + 1;
l_str1 := substr(l_str1, l_del_pos1 + 1);
l_str2 := substr(l_str2, l_del_pos2 + 1);
end loop;
return l_res;
end;
end;
Test case:
with t1(Id1, col1, col2) as(
select 1, 'foo bar live' ,'foo bar' from dual union all
select 2, 'foo live tele' ,'foo tele' from dual union all
select 3, 'bar foo live' ,'foo bar live'from dual
)
select id1
, col1
, col2
, pkg.NumOfSeqWords(col1, col2) as res
from t1
;
Result:
ID1 COL1 COL2 RES
---------- ------------- ------------ ----------
1 foo bar live foo bar 2
2 foo live tele foo tele 1
3 bar foo live foo bar live 0
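The package logic boils down to walking both word lists in lockstep until they disagree. A compact Python equivalent (splitting on whitespace, which treats runs of spaces slightly more leniently than the INSTR-based loop):

```python
def num_of_seq_words(s1, s2):
    """Count how many leading words the two strings share, mirroring
    PKG.NumOfSeqWords above."""
    count = 0
    for w1, w2 in zip(s1.split(), s2.split()):
        if w1 != w2:
            break
        count += 1
    return count
```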
qid & accept id:
(19270491, 19272776)
query:
What is the best way to SELECT data when there are two possible tables holding the detail information?
soup:
What I've done before in a similar situation is introduce a raw query with all possible values, along with the precedence of the value; then use a ROW_NUMBER outer query to get just the value with the highest precedence.
\nI'm going to use your (excellent) sample data, and everything goes after the insert into @GroupWeight.
\nThis is our raw data:
\n-- the product weights (use INNER JOIN to only find \n-- the products with their own weights)\nSELECT\n p.ProductId,\n p.ProductName,\n m.MaterialId,\n m.MaterialName,\n pw.Weight,\n 'Product' WeightSource,\n 20 Precedence\nFROM\n @Product p\n INNER JOIN @ProductWeight pw ON pw.ProductId = p.ProductId\n INNER JOIN @Material m ON m.MaterialId = pw.MaterialId\nUNION ALL\n-- the group weight\nSELECT\n p.ProductId,\n p.ProductName,\n m.MaterialId,\n m.MaterialName,\n gw.Weight,\n 'Group' WeightSource,\n 10 Precedence\nFROM\n @Product p\n INNER JOIN @GroupWeight gw on gw.GroupId = p.GroupId\n INNER JOIN @Material m ON m.MaterialId = gw.MaterialId\n
\nThis will return one row for each product-material with a specific weight, plus one row for each product-material. Each row indicates whether it is a product weight or a group weight.
\nWe can then number the rows, ordering by precedence:
\n-- assume the above is in a CTE named AllWeights\nSELECT \n *,\n ROW_NUMBER() OVER (PARTITION BY ProductId, MaterialId \n ORDER BY Precedence DESC) rn\nFROM \n AllWeights\n
\nWhich gives us the same data with an additional indication of which row for a given product-material is the relevant one, so finally we can get just that:
\n-- assume the above is in a CTE named RowNumbered\nSELECT\n ProductName,\n MaterialName,\n WeightSource,\n Weight\nFROM\n RowNumbered\nWHERE\n rn = 1\n;\n
\nAnd we're done.
\n
\nPutting it all together:
\n;WITH AllWeights AS (\n-- the product weights (use INNER JOIN to only find \n-- the products with their own weights)\nSELECT\n p.ProductId,\n p.ProductName,\n m.MaterialId,\n m.MaterialName,\n pw.Weight,\n 'Product' WeightSource,\n 20 Precedence\nFROM\n @Product p\n INNER JOIN @ProductWeight pw ON pw.ProductId = p.ProductId\n INNER JOIN @Material m ON m.MaterialId = pw.MaterialId\nUNION ALL\n-- the group weight\nSELECT\n p.ProductId,\n p.ProductName,\n m.MaterialId,\n m.MaterialName,\n gw.Weight,\n 'Group' WeightSource,\n 10 Precedence\nFROM\n @Product p\n INNER JOIN @GroupWeight gw on gw.GroupId = p.GroupId\n INNER JOIN @Material m ON m.MaterialId = gw.MaterialId\n),\nRowNumbered AS (\nSELECT \n *,\n ROW_NUMBER() OVER (PARTITION BY ProductId, MaterialId \n ORDER BY Precedence DESC) rn\nFROM \n AllWeights\n)\nSELECT\n ProductName,\n MaterialName,\n WeightSource,\n Weight\nFROM\n RowNumbered\nWHERE\n rn = 1\n;\n
\nOutput:
\nProductName MaterialName WeightSource Weight\n-------------------- ------------ ------------ ------------\nCan of soup Paper Product 5.20\nCan of soup Steel Product 23.10\nCan of beans Paper Group 5.20\nCan of beans Steel Group 23.10\nBottle of beer Paper Product 4.60\nBottle of beer Steel Product 2.40\nBottle of beer Glass Product 185.90\nBottle of wine Paper Product 5.10\nBottle of wine Steel Product 2.60\nBottle of wine Glass Product 650.40\nBottle of sauce Paper Group 4.85\nBottle of sauce Steel Group 2.50\nBottle of sauce Glass Group 418.15\n
\nwhich except for order is the same as yours, I think.
\nYou'll have to check performance yourself, of course.
\n
soup wrap:
What I've done before in a similar situation is introduce a raw query with all possible values, along with the precedence of the value; then use a ROW_NUMBER outer query to get just the value with the highest precedence.
I'm going to use your (excellent) sample data, and everything goes after the insert into @GroupWeight.
This is our raw data:
-- the product weights (use INNER JOIN to only find
-- the products with their own weights)
SELECT
p.ProductId,
p.ProductName,
m.MaterialId,
m.MaterialName,
pw.Weight,
'Product' WeightSource,
20 Precedence
FROM
@Product p
INNER JOIN @ProductWeight pw ON pw.ProductId = p.ProductId
INNER JOIN @Material m ON m.MaterialId = pw.MaterialId
UNION ALL
-- the group weight
SELECT
p.ProductId,
p.ProductName,
m.MaterialId,
m.MaterialName,
gw.Weight,
'Group' WeightSource,
10 Precedence
FROM
@Product p
INNER JOIN @GroupWeight gw on gw.GroupId = p.GroupId
INNER JOIN @Material m ON m.MaterialId = gw.MaterialId
This will return one row for each product-material with a specific weight, plus one row for each product-material. Each row indicates whether it is a product weight or a group weight.
We can then number the rows, ordering by precedence:
-- assume the above is in a CTE named AllWeights
SELECT
*,
ROW_NUMBER() OVER (PARTITION BY ProductId, MaterialId
ORDER BY Precedence DESC) rn
FROM
AllWeights
Which gives us the same data with an additional indication of which row for a given product-material is the relevant one, so finally we can get just that:
-- assume the above is in a CTE named RowNumbered
SELECT
ProductName,
MaterialName,
WeightSource,
Weight
FROM
RowNumbered
WHERE
rn = 1
;
And we're done.
Putting it all together:
;WITH AllWeights AS (
-- the product weights (use INNER JOIN to only find
-- the products with their own weights)
SELECT
p.ProductId,
p.ProductName,
m.MaterialId,
m.MaterialName,
pw.Weight,
'Product' WeightSource,
20 Precedence
FROM
@Product p
INNER JOIN @ProductWeight pw ON pw.ProductId = p.ProductId
INNER JOIN @Material m ON m.MaterialId = pw.MaterialId
UNION ALL
-- the group weight
SELECT
p.ProductId,
p.ProductName,
m.MaterialId,
m.MaterialName,
gw.Weight,
'Group' WeightSource,
10 Precedence
FROM
@Product p
INNER JOIN @GroupWeight gw on gw.GroupId = p.GroupId
INNER JOIN @Material m ON m.MaterialId = gw.MaterialId
),
RowNumbered AS (
SELECT
*,
ROW_NUMBER() OVER (PARTITION BY ProductId, MaterialId
ORDER BY Precedence DESC) rn
FROM
AllWeights
)
SELECT
ProductName,
MaterialName,
WeightSource,
Weight
FROM
RowNumbered
WHERE
rn = 1
;
Output:
ProductName MaterialName WeightSource Weight
-------------------- ------------ ------------ ------------
Can of soup Paper Product 5.20
Can of soup Steel Product 23.10
Can of beans Paper Group 5.20
Can of beans Steel Group 23.10
Bottle of beer Paper Product 4.60
Bottle of beer Steel Product 2.40
Bottle of beer Glass Product 185.90
Bottle of wine Paper Product 5.10
Bottle of wine Steel Product 2.60
Bottle of wine Glass Product 650.40
Bottle of sauce Paper Group 4.85
Bottle of sauce Steel Group 2.50
Bottle of sauce Glass Group 418.15
which except for order is the same as yours, I think.
You'll have to check performance yourself, of course.
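The precedence idea is independent of SQL: per key, keep only the candidate with the highest precedence. A small Python sketch with hypothetical sample rows:

```python
def pick_by_precedence(rows):
    """Per (product, material) key, keep the row with the highest
    precedence -- the idea behind ROW_NUMBER() ... ORDER BY Precedence DESC.
    rows: tuples of (product, material, weight_source, weight, precedence)."""
    best = {}
    for row in rows:
        key = (row[0], row[1])
        if key not in best or row[4] > best[key][4]:
            best[key] = row
    return sorted(best.values())
```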
qid & accept id:
(19279889, 19280129)
query:
Removing the prefix of a string in TSQL
soup:
Try this :
\nRIGHT(words, LEN(words) - (LEN(prefix+'?')-1))\n
\nEDITED :
\nMay be you will find this one "cleaner" :
\nRIGHT(words, LEN(words) - DATALENGTH(CONVERT(VARCHAR(100),prefix)))\n
\n
soup wrap:
Try this:
RIGHT(words, LEN(words) - (LEN(prefix+'?')-1))
EDITED:
Maybe you will find this one "cleaner":
RIGHT(words, LEN(words) - DATALENGTH(CONVERT(VARCHAR(100),prefix)))
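Note that both T-SQL expressions assume the prefix actually matches the start of the string. A Python sketch that also guards against a non-matching prefix:

```python
def strip_prefix(word, prefix):
    """Remove prefix from the start of word; unlike the RIGHT/LEN
    expression, leave word unchanged if the prefix does not match."""
    return word[len(prefix):] if word.startswith(prefix) else word
```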
qid & accept id:
(19307842, 19307904)
query:
Calling a stored procedure with a select
soup:
The stored procedure is populating RT but you then need to select out of it:
\nCREATE OR REPLACE PROCEDURE MDC_UTIL_PROCEDURE (results OUT SYS_REFCURSOR)\nAS\n RT MDC_CAT_PARAMETROS%ROWTYPE;\nBEGIN\n SELECT * INTO RT FROM MDC_CAT_PARAMETROS WHERE PARAM_LLAVE='SMTP_SERVER';\n OPEN results FOR SELECT * FROM RT;\nEND MDC_UTIL_PROCEDURE; \n
\nor you could simplify it to get rid of the RT variable:
\nCREATE OR REPLACE PROCEDURE MDC_UTIL_PROCEDURE (results OUT SYS_REFCURSOR)\nAS\nBEGIN\n OPEN results FOR \n SELECT * FROM MDC_CAT_PARAMETROS WHERE PARAM_LLAVE='SMTP_SERVER';\nEND MDC_UTIL_PROCEDURE; \n
\n
soup wrap:
The stored procedure is populating RT but you then need to select out of it:
CREATE OR REPLACE PROCEDURE MDC_UTIL_PROCEDURE (results OUT SYS_REFCURSOR)
AS
RT MDC_CAT_PARAMETROS%ROWTYPE;
BEGIN
SELECT * INTO RT FROM MDC_CAT_PARAMETROS WHERE PARAM_LLAVE='SMTP_SERVER';
OPEN results FOR SELECT * FROM RT;
END MDC_UTIL_PROCEDURE;
or you could simplify it to get rid of the RT variable:
CREATE OR REPLACE PROCEDURE MDC_UTIL_PROCEDURE (results OUT SYS_REFCURSOR)
AS
BEGIN
OPEN results FOR
SELECT * FROM MDC_CAT_PARAMETROS WHERE PARAM_LLAVE='SMTP_SERVER';
END MDC_UTIL_PROCEDURE;
qid & accept id:
(19329816, 19333964)
query:
Hierarchical Query( how to retrieve middle nodes)
soup:
You can use the CONNECT_BY_IS_LEAF pseudo column for this.
\nselect level, first_name ||' '|| last_name "FullName" \nfrom more_employees\nwhere connect_by_isleaf = 0 and manager_id is not null\nstart with employee_id = 1\nconnect by prior employee_id = manager_id;\n
\nYou can also use that to get all leafs:
\nselect level, first_name ||' '|| last_name "FullName" \nfrom more_employees\nwhere connect_by_isleaf = 1\nstart with employee_id = 1\nconnect by prior employee_id = manager_id;\n
\nWhich is probably faster than your solution with a sub-select
\nHere is an SQLFiddle example: http://sqlfiddle.com/#!4/511d9/2
\n
soup wrap:
You can use the CONNECT_BY_ISLEAF pseudocolumn for this.
select level, first_name ||' '|| last_name "FullName"
from more_employees
where connect_by_isleaf = 0 and manager_id is not null
start with employee_id = 1
connect by prior employee_id = manager_id;
You can also use it to get all leaf nodes:
select level, first_name ||' '|| last_name "FullName"
from more_employees
where connect_by_isleaf = 1
start with employee_id = 1
connect by prior employee_id = manager_id;
This is probably faster than your solution with a sub-select.
Here is an SQLFiddle example: http://sqlfiddle.com/#!4/511d9/2
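The same middle-node filter, expressed over an adjacency map in Python: a middle node has a manager (so it is not the root) and at least one report (so it is not a leaf). The employee ids below are hypothetical:

```python
def middle_nodes(edges):
    """Nodes that are neither root nor leaf -- what
    connect_by_isleaf = 0 AND manager_id IS NOT NULL selects.
    edges: {employee_id: manager_id or None for the root}."""
    managers = {m for m in edges.values() if m is not None}
    return sorted(e for e, m in edges.items()
                  if m is not None and e in managers)
```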
qid & accept id:
(19353473, 19353591)
query:
SQL multiple table join throwing dupes
soup:
Your first couple joins (Video/VideoTags/Tags) yields a table like so:
\nVideoID = 1 will bring in TagID = 2,5 (Dogs, orlyowl) so you have this\n\n| 1 | Dogs\n| 1 | orlyowl\n
\nWhen you join to VideoChannels, it duplicates the above entries for each channel
\n| 1 | Dogs | 1\n| 1 | orlyowl | 1\n| 1 | Dogs | 4\n| 1 | orlyowl | 4\n| 1 | Dogs | 6\n| 1 | orlyowl | 6\n
\ngroup_concat has a DISTINCT attribute
\nselect v.*\n , group_concat(distinct t.tagName) Tags\n , group_concat(distinct c.channelName) Channels\nfrom videos as v \ninner join videoTags as vt on v.videoId = vt.videoid\ninner join tags as t on t.tagId = vt.tagId\ninner join videoChannels as vc on v.videoId = vc.videoId\ninner join channels as c on c.channelId = vc.channelId\ngroup by v.videoId;\n
\n
soup wrap:
Your first couple of joins (Video/VideoTags/Tags) yield a table like so:
VideoID = 1 will bring in TagID = 2,5 (Dogs, orlyowl) so you have this
| 1 | Dogs
| 1 | orlyowl
When you join to VideoChannels, it duplicates the above entries for each channel
| 1 | Dogs | 1
| 1 | orlyowl | 1
| 1 | Dogs | 4
| 1 | orlyowl | 4
| 1 | Dogs | 6
| 1 | orlyowl | 6
group_concat has a DISTINCT option:
select v.*
, group_concat(distinct t.tagName) Tags
, group_concat(distinct c.channelName) Channels
from videos as v
inner join videoTags as vt on v.videoId = vt.videoid
inner join tags as t on t.tagId = vt.tagId
inner join videoChannels as vc on v.videoId = vc.videoId
inner join channels as c on c.channelId = vc.channelId
group by v.videoId;
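An in-memory SQLite session shows how DISTINCT inside group_concat collapses the rows duplicated by the channel join (the data mirrors the VideoID = 1 example above):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE joined (videoId INTEGER, tagName TEXT, channelId INTEGER);
    INSERT INTO joined VALUES
        (1, 'Dogs', 1), (1, 'orlyowl', 1),
        (1, 'Dogs', 4), (1, 'orlyowl', 4),
        (1, 'Dogs', 6), (1, 'orlyowl', 6);
""")
tags, channels = db.execute("""
    SELECT group_concat(DISTINCT tagName), group_concat(DISTINCT channelId)
    FROM joined
    GROUP BY videoId
""").fetchone()
```

Each aggregate deduplicates its own column, so neither list repeats entries.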
qid & accept id:
(19356906, 19357183)
query:
Show modified strings that appear more than once
soup:
Just add
\nGROUP BY Keydomain\nHAVING COUNT(*) > 1\n
\nto your query.
\nEDIT:
\n\nCould you tell me if there is a way to list the complete domains one by one with your addition?
\n
\nSELECT * FROM\n(\nSELECT \nCASE \nWHEN LENGTH(domain) - LENGTH(REPLACE(domain, '.', '')) = 1 THEN REVERSE(SUBSTRING(REVERSE(domain), LOCATE('.', REVERSE(domain)) + 1, 1000))\nWHEN LENGTH(domain) - LENGTH(REPLACE(domain, '.', '')) = 2 THEN REVERSE(SUBSTRING(REVERSE(REVERSE(SUBSTRING(REVERSE(domain), LOCATE('.', REVERSE(domain)) + 1, 1000))), LOCATE('.', REVERSE(REVERSE(SUBSTRING(REVERSE(domain), LOCATE('.', REVERSE(domain)) + 1, 1000)))) + 1, 1000))\nEND as Keydomain\nFROM sites\nGROUP BY Keydomain\nHAVING COUNT(*) > 1\n) d1\nINNER JOIN\n(\nSELECT id, domain,\nCASE \nWHEN LENGTH(domain) - LENGTH(REPLACE(domain, '.', '')) = 1 THEN REVERSE(SUBSTRING(REVERSE(domain), LOCATE('.', REVERSE(domain)) + 1, 1000))\nWHEN LENGTH(domain) - LENGTH(REPLACE(domain, '.', '')) = 2 THEN REVERSE(SUBSTRING(REVERSE(REVERSE(SUBSTRING(REVERSE(domain), LOCATE('.', REVERSE(domain)) + 1, 1000))), LOCATE('.', REVERSE(REVERSE(SUBSTRING(REVERSE(domain), LOCATE('.', REVERSE(domain)) + 1, 1000)))) + 1, 1000))\nEND as Keydomain\nFROM sites\n) d2\nON d1.Keydomain = d2.Keydomain\n
\n
soup wrap:
Just add
GROUP BY Keydomain
HAVING COUNT(*) > 1
to your query.
EDIT:
Could you tell me if there is a way to list the complete domains one by one with your addition?
SELECT * FROM
(
SELECT
CASE
WHEN LENGTH(domain) - LENGTH(REPLACE(domain, '.', '')) = 1 THEN REVERSE(SUBSTRING(REVERSE(domain), LOCATE('.', REVERSE(domain)) + 1, 1000))
WHEN LENGTH(domain) - LENGTH(REPLACE(domain, '.', '')) = 2 THEN REVERSE(SUBSTRING(REVERSE(REVERSE(SUBSTRING(REVERSE(domain), LOCATE('.', REVERSE(domain)) + 1, 1000))), LOCATE('.', REVERSE(REVERSE(SUBSTRING(REVERSE(domain), LOCATE('.', REVERSE(domain)) + 1, 1000)))) + 1, 1000))
END as Keydomain
FROM sites
GROUP BY Keydomain
HAVING COUNT(*) > 1
) d1
INNER JOIN
(
SELECT id, domain,
CASE
WHEN LENGTH(domain) - LENGTH(REPLACE(domain, '.', '')) = 1 THEN REVERSE(SUBSTRING(REVERSE(domain), LOCATE('.', REVERSE(domain)) + 1, 1000))
WHEN LENGTH(domain) - LENGTH(REPLACE(domain, '.', '')) = 2 THEN REVERSE(SUBSTRING(REVERSE(REVERSE(SUBSTRING(REVERSE(domain), LOCATE('.', REVERSE(domain)) + 1, 1000))), LOCATE('.', REVERSE(REVERSE(SUBSTRING(REVERSE(domain), LOCATE('.', REVERSE(domain)) + 1, 1000)))) + 1, 1000))
END as Keydomain
FROM sites
) d2
ON d1.Keydomain = d2.Keydomain
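For one- and two-dot domains, the nested REVERSE/SUBSTRING expressions strip the trailing label(s), leaving only the first label as the key. A Python sketch of that reduction plus the duplicate check:

```python
from collections import Counter

def key_domain(domain):
    """First label of the domain -- what the REVERSE/SUBSTRING CASE
    computes for one- and two-dot domains ('www.example.com' -> 'www')."""
    return domain.split(".", 1)[0]

def duplicated_key_domains(domains):
    """Key domains appearing more than once: GROUP BY ... HAVING COUNT(*) > 1."""
    counts = Counter(key_domain(d) for d in domains)
    return {k for k, n in counts.items() if n > 1}
```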
qid & accept id:
(19359464, 19359558)
query:
create 1 column from 2 column with in SQL
soup:
Please try:
\nselect \n a.col2+'#'+b.col2 \nfrom \n T1 a, T1 b \nwhere a.col1='Con'and \n b.col1='Arr'\n
\nOR
\nselect \n a.col2+'#'+b.col2 \nfrom \n T1 a CROSS JOIN T1 b \nwhere a.col1='Con'and \n b.col1='Arr'\n
\n
soup wrap:
Please try:
select
a.col2+'#'+b.col2
from
T1 a, T1 b
where a.col1='Con' and
b.col1='Arr'
OR
select
a.col2+'#'+b.col2
from
T1 a CROSS JOIN T1 b
where a.col1='Con' and
b.col1='Arr'
qid & accept id:
(19408757, 19419243)
query:
Cakephp - Adding data to relational database
soup:
to create a select:
\nin your ImagesController
\npublic function add() {\n //\n // ...\n //\n $albums = $this->Image->Album->find('list');\n $this->set('albums', $albums);\n}\n
\nsomewhere in your add.ctp view file
\necho $this->Form->input('album_id');\n
\n
soup wrap:
to create a select:
in your ImagesController
public function add() {
//
// ...
//
$albums = $this->Image->Album->find('list');
$this->set('albums', $albums);
}
somewhere in your add.ctp view file
echo $this->Form->input('album_id');
qid & accept id:
(19436954, 19437127)
query:
Change some value each 5 inserts (MySQL Stored Procedure)
soup:
Will using the following work:
\nCAST((Counter / 5) AS UNSIGNED)\n
\nOR
\nFLOOR(Counter / 5)\n
\nOR
\nFORMAT((Counter / 5),0)\n
\nIt would look something like the following:
\nVALUES \n ("Hello!", \n "Click here.",\n "Can you tell me your name?",\n "example.com/img.jpg",\n "google.com",\n CAST((Counter / 5) AS UNSIGNED),\n 40,\n 2013);\n
\n
soup wrap:
Will using the following work:
CAST((Counter / 5) AS UNSIGNED)
OR
FLOOR(Counter / 5)
OR
FORMAT((Counter / 5),0)
It would look something like the following:
VALUES
("Hello!",
"Click here.",
"Can you tell me your name?",
"example.com/img.jpg",
"google.com",
CAST((Counter / 5) AS UNSIGNED),
40,
2013);
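One caveat: in MySQL, CAST of the decimal division result to UNSIGNED rounds to the nearest integer rather than truncating, so FLOOR(Counter / 5) is the safer spelling when the value should change exactly every 5 inserts. The intended behavior in Python:

```python
def batch_number(counter):
    """Integer-divide the running counter by 5 so the value changes
    once every 5 inserts -- equivalent to FLOOR(Counter / 5)."""
    return counter // 5
```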
qid & accept id:
(19447701, 19448095)
query:
Change single database datetime format
soup:
Use this query:
\nSELECT CONVERT(VARCHAR(10), convert(date,'2013/10/18'), 103) AS [DD/MM/YYYY]\n
\nOR
\nSELECT CONVERT(VARCHAR(10), getdate(), 103) AS [DD/MM/YYYY]\n
\n
soup wrap:
use this query
SELECT CONVERT(VARCHAR(10), convert(date,'2013/10/18'), 103) AS [DD/MM/YYYY]
OR
SELECT CONVERT(VARCHAR(10), getdate(), 103) AS [DD/MM/YYYY]
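Style 103 is DD/MM/YYYY; if the formatting can happen in the application instead, it is a single strftime call:

```python
from datetime import date

def to_ddmmyyyy(d):
    """Format a date as DD/MM/YYYY, the same output as CONVERT(..., 103)."""
    return d.strftime("%d/%m/%Y")
```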
qid & accept id:
(19459274, 19562334)
query:
Sequential Group By in sql server
soup:
Per the tag I added to your question this is a gaps and islands problem.
\nThe best performing solution will likely be
\nWITH T\n AS (SELECT *,\n ID - ROW_NUMBER() OVER (PARTITION BY [STATUS] ORDER BY [ID]) AS Grp\n FROM YourTable)\nSELECT [STATUS],\n SUM([VALUE]) AS [SUM(VALUE)]\nFROM T\nGROUP BY [STATUS],\n Grp\nORDER BY MIN(ID)\n
\nIf the ID values were not guaranteed contiguous as stated then you would need to use
\nROW_NUMBER() OVER (ORDER BY [ID]) - \n ROW_NUMBER() OVER (PARTITION BY [STATUS] ORDER BY [ID]) AS Grp\n
\nInstead in the CTE definition.
\n\n
soup wrap:
Per the tag I added to your question this is a gaps and islands problem.
The best performing solution will likely be
WITH T
AS (SELECT *,
ID - ROW_NUMBER() OVER (PARTITION BY [STATUS] ORDER BY [ID]) AS Grp
FROM YourTable)
SELECT [STATUS],
SUM([VALUE]) AS [SUM(VALUE)]
FROM T
GROUP BY [STATUS],
Grp
ORDER BY MIN(ID)
If the ID values were not guaranteed contiguous as stated then you would need to use
ROW_NUMBER() OVER (ORDER BY [ID]) -
ROW_NUMBER() OVER (PARTITION BY [STATUS] ORDER BY [ID]) AS Grp
instead, in the CTE definition.
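The ID - ROW_NUMBER() trick works because the difference is constant within each contiguous run of equal status, so it serves as a group key. A Python sketch over (id, status, value) rows with contiguous ids:

```python
def sum_by_island(rows):
    """Sum value per contiguous run of equal status, using the
    id - row_number-per-status trick. rows: list of (id, status, value)
    with contiguous ids, ordered by id."""
    counters = {}   # per-status row_number
    groups = {}     # (status, id - row_number) -> running sum
    order = []      # group keys in first-seen (i.e. MIN(ID)) order
    for rid, status, value in rows:
        counters[status] = counters.get(status, 0) + 1
        grp = (status, rid - counters[status])
        if grp not in groups:
            groups[grp] = 0
            order.append(grp)
        groups[grp] += value
    return [(status, groups[(status, g)]) for status, g in order]
```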
qid & accept id:
(19499472, 19499896)
query:
sql query get multiple values from same column for one row
soup:
If you are selecting email and phone in subqueries these two joins are probably unnecessary:
\nleft join StaffContactInformation as sci on sr.ID = sci.StaffID\ninner join dictStaffContactTypes as dsct on sci.ContactTypeID = dsct.ID\n
\nBecause of them you are getting as many rows as contacts for specific person.
\nFinal query might look like:
\nSELECT sr.LastName, sr.FirstName, dd.Name, \n Email = (\n select sc.ContactValue FROM StaffContactInformation as sc\n INNER JOIN StaffRoster as roster on sc.StaffID = roster.ID\n where sc.ContactTypeID = 3 and roster.ID = sr.ID\n ),\n Phone = (\n SELECT sc1.ContactValue FROM StaffContactInformation as sc1 \n INNER JOIN StaffRoster as roster on sc1.StaffID = roster.ID\n where sc1.ContactTypeID = 1\n ) \nFROM StaffRoster as sr\nleft join dictDivisions as dd on sr.DivisionID = dd.Id \nwhere (sr.Active = 1 and sr.isContractor = 0 )\nORDER BY sr.LastName, sr.FirstName\n
\n
soup wrap:
If you are selecting email and phone in subqueries, these two joins are probably unnecessary:
left join StaffContactInformation as sci on sr.ID = sci.StaffID
inner join dictStaffContactTypes as dsct on sci.ContactTypeID = dsct.ID
Because of them, you are getting as many rows as there are contacts for a specific person.
Final query might look like:
SELECT sr.LastName, sr.FirstName, dd.Name,
Email = (
select sc.ContactValue FROM StaffContactInformation as sc
INNER JOIN StaffRoster as roster on sc.StaffID = roster.ID
where sc.ContactTypeID = 3 and roster.ID = sr.ID
),
Phone = (
SELECT sc1.ContactValue FROM StaffContactInformation as sc1
INNER JOIN StaffRoster as roster on sc1.StaffID = roster.ID
where sc1.ContactTypeID = 1 and roster.ID = sr.ID
)
FROM StaffRoster as sr
left join dictDivisions as dd on sr.DivisionID = dd.Id
where (sr.Active = 1 and sr.isContractor = 0 )
ORDER BY sr.LastName, sr.FirstName
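Each scalar subquery plays the role of a keyed lookup correlated to the outer row. A Python sketch of that lookup, assuming contacts as (staff_id, contact_type_id, value) tuples:

```python
def contact_value(contacts, staff_id, contact_type):
    """Scalar-subquery equivalent: the first contact value matching one
    staff member and one contact type, or None if there is no match."""
    for sid, ctype, value in contacts:
        if sid == staff_id and ctype == contact_type:
            return value
    return None
```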
qid & accept id:
(19532288, 19532788)
query:
MySQL Adding Timestamp Values, Adding Resultset, and Grouping by Date
soup:
soup wrap:
You don't need to GROUP BY start_time, end_time if you have a date column (I suggest you create a date column to group the time diffs on).
here's my example:
my table (named time)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++
date | starttime | endtime |
++++++++++++++++++++++++++++++++++++++++++++++++++++++++
2013-10-23 | 2013-10-23 08:00:00 | 2013-10-23 16:30:00 |
2013-10-24 | 2013-10-24 08:30:00 | 2013-10-24 17:00:00 |
this is my query to display the time difference between starttime and endtime:
SELECT *, TIMEDIFF(endtime,starttime) AS duration FROM time
it will return :
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
date | starttime | endtime | duration |
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
2013-10-23 | 2013-10-23 08:00:00 | 2013-10-23 16:30:00 | 08:30:00 |
2013-10-24 | 2013-10-24 08:30:00 | 2013-10-24 17:00:00 | 08:30:00 |
That works if you have a date column separate from starttime and endtime.
You didn't give me the structure of your table, so I can't see your problem clearly.
UPDATE :
I imagine that you have a table like this :
And maybe your problem is: calculating the time between the starting time and ending time of a user's day, where the user could start and stop at any time (on that day).
I run this query to do that :
SELECT *, TIMEDIFF(MAX(end),MIN(start)) AS duration FROM time
GROUP BY user_id, date
ORDER BY date ASC;
It will return this:
or if you run this query :
SELECT
user_id,
MIN(start) AS start,
MAX(end) AS end,
TIMEDIFF(MAX(end),MIN(start)) AS duration
FROM time
GROUP BY user_id, date
ORDER BY date ASC
it will return this :
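The MIN(start)/MAX(end) grouping can be sketched in plain Python as well (the session rows below are invented for illustration):

```python
from datetime import datetime

# Hypothetical session rows: (user_id, start, end), following the answer's table.
sessions = [
    (1, "2013-10-23 08:00:00", "2013-10-23 12:00:00"),
    (1, "2013-10-23 13:00:00", "2013-10-23 16:30:00"),
    (2, "2013-10-23 09:15:00", "2013-10-23 17:45:00"),
]

fmt = "%Y-%m-%d %H:%M:%S"

# Group by (user_id, date), then take TIMEDIFF(MAX(end), MIN(start)),
# mirroring the GROUP BY user_id, date query above.
groups = {}
for user_id, start, end in sessions:
    s, e = datetime.strptime(start, fmt), datetime.strptime(end, fmt)
    key = (user_id, s.date().isoformat())
    lo, hi = groups.get(key, (s, e))
    groups[key] = (min(lo, s), max(hi, e))

durations = {key: str(hi - lo) for key, (lo, hi) in groups.items()}
```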
qid & accept id:
(19532801, 19533387)
query:
Inserting a TIME value
soup:
soup wrap:
As an alternative, the start_time field could store "14:00:00" directly.
e.g
UPDATE TABLE SET start_time= STR_TO_DATE('14:00:00', '%k:%i:%s');
When you retrieve the data, the SQL may look like this:
SELECT TIME_FORMAT(start_time, '%r') FROM TABLE
However, it is still a little different from your expectation: the result will be 2:00:00 PM.
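In Python terms this is roughly a strptime/strftime round trip; MySQL's %k:%i:%s and %r loosely correspond to the formats below (the AM/PM marker from %p can be locale dependent):

```python
from datetime import datetime

# STR_TO_DATE('14:00:00', '%k:%i:%s') roughly corresponds to strptime
# with %H:%M:%S, and TIME_FORMAT(..., '%r') to a 12-hour strftime format.
t = datetime.strptime("14:00:00", "%H:%M:%S")
formatted = t.strftime("%I:%M:%S %p")
```

Under a C locale this yields "02:00:00 PM" (leading zero included, unlike the bare 2:00:00 PM above).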
qid & accept id:
(19562212, 19562538)
query:
SQL - select row with most matching columns
soup:
soup wrap:
This should do the trick:
SELECT * FROM (
SELECT *, CASE application WHEN ? THEN 1 WHEN NULL THEN 0 ELSE NULL END
+ CASE dstIP WHEN ? THEN 1 WHEN NULL THEN 0 ELSE NULL END
+ CASE dstPort WHEN ? THEN 1 WHEN NULL THEN 0 ELSE NULL END AS Matches
FROM table WHERE Matches IS NOT NULL
) GROUP BY application, dstIP, dstPort ORDER BY Matches DESC;
The Matches column counts the matching columns, or becomes NULL on any mismatch.
GROUP BY without aggregate functions will keep the first row (I hope!), which is the best match because the inner query is sorted descending.
EDIT: New version:
SELECT *, CASE WHEN application IS ? THEN 1 WHEN application IS NULL THEN 0 ELSE NULL END
+ CASE WHEN dstIP IS ? THEN 1 WHEN dstIP IS NULL THEN 0 ELSE NULL END
+ CASE WHEN dstPort IS ? THEN 1 WHEN dstPort IS NULL THEN 0 ELSE NULL END AS Matches
FROM t
WHERE Matches IS NOT NULL
ORDER BY Matches DESC
LIMIT 1;
Advantages: you can compare NULL too. Disadvantages: only one match is shown when equally ranked matches exist.
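A sketch of the second query in Python's sqlite3, with an invented rules table. The Matches alias is computed in a subquery here because most engines don't let the WHERE clause reference a column alias directly; SQLite's IS operator stands in for a NULL-safe comparison:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE rules (application TEXT, dstIP TEXT, dstPort INTEGER, action TEXT);
INSERT INTO rules VALUES
  ('http', '1.2.3.4', 80, 'allow'),  -- matches all three criteria
  ('http', NULL,      80, 'log'),    -- NULL column scores 0, not a mismatch
  ('ftp',  '1.2.3.4', 21, 'deny');   -- mismatch -> NULL -> filtered out
""")

# Rank rows by how many columns match the probe values, NULL wildcards
# scoring 0 and genuine mismatches poisoning the sum to NULL.
row = conn.execute("""
SELECT * FROM (
  SELECT action,
         CASE WHEN application IS ? THEN 1 WHEN application IS NULL THEN 0 ELSE NULL END
       + CASE WHEN dstIP       IS ? THEN 1 WHEN dstIP       IS NULL THEN 0 ELSE NULL END
       + CASE WHEN dstPort     IS ? THEN 1 WHEN dstPort     IS NULL THEN 0 ELSE NULL END AS Matches
    FROM rules
)
WHERE Matches IS NOT NULL
ORDER BY Matches DESC
LIMIT 1
""", ("http", "1.2.3.4", 80)).fetchone()
```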
qid & accept id:
(19577349, 19577436)
query:
SQL select all from one table of joint tables
soup:
soup wrap:
In the GROUP BY you need to list all the columns that are not aggregated.
So your query has to become:
SELECT FLIGHTS.*,
SEATS_MAX-COUNT(BOOKING_ID)
FROM FLIGHTS
INNER JOIN PLANES
ON FLIGHTS.PLANE_ID = PLANES.PLANE_ID
LEFT JOIN BOOKINGS
ON FLIGHTS.FLIGHT_ID = BOOKINGS.FLIGHT_ID
GROUP BY FLIGHTS.Column1,
...
FLIGHTS.ColumN,
SEATS_MAX;
Edit:
To list all the columns of your table you can use the following query:
SELECT 'FLIGHTS.' || column_name
FROM user_tab_columns
WHERE table_name = 'FLIGHTS'
ORDER BY column_id;
This should make your life a bit easier: just copy and paste.
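A small sqlite3 sketch of the pattern with invented FLIGHTS/PLANES/BOOKINGS data. It also shows why COUNT(BOOKING_ID), which skips NULLs, is the right aggregate: a flight with no bookings still produces one NULL-extended row from the LEFT JOIN, which COUNT(*) would wrongly count as a booking:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE FLIGHTS  (FLIGHT_ID INTEGER, PLANE_ID INTEGER);
CREATE TABLE PLANES   (PLANE_ID INTEGER, SEATS_MAX INTEGER);
CREATE TABLE BOOKINGS (BOOKING_ID INTEGER, FLIGHT_ID INTEGER);
INSERT INTO FLIGHTS  VALUES (10, 1), (11, 1);
INSERT INTO PLANES   VALUES (1, 4);
INSERT INTO BOOKINGS VALUES (100, 10), (101, 10);  -- flight 11 has no bookings
""")

# Free seats per flight: every non-aggregated column appears in the GROUP BY.
rows = conn.execute("""
SELECT FLIGHTS.FLIGHT_ID,
       SEATS_MAX - COUNT(BOOKING_ID) AS SEATS_FREE
  FROM FLIGHTS
  INNER JOIN PLANES   ON FLIGHTS.PLANE_ID  = PLANES.PLANE_ID
  LEFT  JOIN BOOKINGS ON FLIGHTS.FLIGHT_ID = BOOKINGS.FLIGHT_ID
 GROUP BY FLIGHTS.FLIGHT_ID, FLIGHTS.PLANE_ID, SEATS_MAX
 ORDER BY FLIGHTS.FLIGHT_ID
""").fetchall()
```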
qid & accept id:
(19582011, 19582546)
query:
How can I copy one column to from one table to another in SQL Server
soup:
soup wrap:
How can I abort those active transactions so the task can be successful ?
You can't, because the whole UPDATE ... FROM statement runs as a single transaction.
You can either increase max size of the log file:
ALTER DATABASE DB_NAME
MODIFY FILE (NAME=LOG_FILE_NAME,MAXSIZE=UNLIMITED);
Or you can try something like this:
WHILE EXISTS
(select *
from ExceptionRow
inner join HashFP ON ExceptionRow.Hash=HashFP.FingerPrintMD5
where ExceptionRow.Message is null
AND not HashFP.MessageFP is null
)
UPDATE TOP (1000) ExceptionRow
SET Exceptionrow.Message = HashFP.MessageFP
FROM ExceptionRow
INNER JOIN HashFP ON ExceptionRow.Hash=HashFP.FingerPrintMD5
WHERE ExceptionRow.Message IS NULL
AND NOT HashFP.MessageFP IS NULL
If the database uses the SIMPLE recovery model this should work; if it uses FULL or BULK_LOGGED you also need to back up the transaction log on every iteration.
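The batching idea carries over to any engine. Here is a sqlite3 sketch with invented data; SQLite has no UPDATE TOP, so a rowid IN (... LIMIT n) subquery plays that role:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ExceptionRow (Hash TEXT, Message TEXT)")
conn.execute("CREATE TABLE HashFP (FingerPrintMD5 TEXT, MessageFP TEXT)")
conn.executemany("INSERT INTO ExceptionRow VALUES (?, NULL)",
                 [(f"h{i}",) for i in range(10)])
conn.executemany("INSERT INTO HashFP VALUES (?, ?)",
                 [(f"h{i}", f"msg{i}") for i in range(10)])

BATCH = 3  # tiny batch size for the demo; the answer uses TOP (1000)
batches = 0
while True:
    # Update a limited slice of the still-NULL rows and commit, then loop
    # until none remain -- mirroring WHILE EXISTS ... UPDATE TOP (1000).
    cur = conn.execute("""
        UPDATE ExceptionRow
           SET Message = (SELECT MessageFP FROM HashFP
                           WHERE FingerPrintMD5 = ExceptionRow.Hash)
         WHERE rowid IN (SELECT rowid FROM ExceptionRow
                          WHERE Message IS NULL LIMIT ?)
    """, (BATCH,))
    conn.commit()
    if cur.rowcount == 0:
        break
    batches += 1

remaining = conn.execute(
    "SELECT COUNT(*) FROM ExceptionRow WHERE Message IS NULL").fetchone()[0]
```

Committing per batch is what keeps each transaction (and hence the log growth) small.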
qid & accept id:
(19582702, 19582887)
query:
Get data from object in SQL
soup:
soup wrap:
How the returned result is displayed depends heavily on the client you use to execute the query. It would be better to explicitly specify the properties of the object instance you want displayed. For example:
create or replace type T_Obj as object(
prop1 number,
prop2 date
)
create or replace function F_1(
p_var1 in number,
p_var2 in date
) return t_obj is
begin
return t_obj(p_var1, p_var2);
end;
select t.obj.prop1
, t.obj.prop2
from (select F_1(1, sysdate) as obj
from dual) t
result:
OBJ.PROP1 OBJ.PROP2
---------- -----------
1 25-Oct-2013
qid & accept id:
(19645073, 19645805)
query:
SQL for list of winners that have won at least a specific percentage of times
soup:
soup wrap:
For one user:
SELECT ifnull(wins, 0) wins, ifnull(loses,0) loses,
ifnull(wins, 0)+ifnull(loses,0) total,
ifnull(wins, 0) / ( ifnull(wins, 0)+ifnull(loses,0)) percent
FROM (
SELECT
(SELECT COUNT(*) FROM user_versus WHERE id_user_winner = 6 ) wins,
(SELECT COUNT(*) FROM user_versus WHERE id_user_loser = 6 ) loses
) subqry
For all users:
SELECT id_user_winner AS id_user,
ifnull(wins, 0) wins,
ifnull(loses,0) loses,
ifnull(wins, 0)+ifnull(loses,0) total,
ifnull(wins, 0) / ( ifnull(wins, 0)+ifnull(loses,0)) percent
FROM (
SELECT id_user_winner AS id_user FROM user_versus
UNION
SELECT id_user_loser FROM user_versus
) u
LEFT JOIN (
SELECT id_user_winner, count(*) wins
FROM user_versus
GROUP BY id_user_winner
) w
ON u.id_user = w.id_user_winner
LEFT JOIN (
SELECT id_user_loser, count(*) loses
FROM user_versus
GROUP BY id_user_loser
) l
ON u.id_user = l.id_user_loser
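A runnable sqlite3 sketch of the all-users query with invented match data; note the commas after the wins/loses columns and that LEFT JOIN takes the derived table directly, without FROM:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_versus (id_user_winner INTEGER, id_user_loser INTEGER)")
conn.executemany("INSERT INTO user_versus VALUES (?, ?)",
                 [(6, 7), (6, 8), (7, 6)])  # user 6: 2 wins, 1 loss

# A distinct user list LEFT JOINed to per-user win and loss counts;
# IFNULL guards users who only ever appear on one side.
rows = conn.execute("""
SELECT u.id_user,
       IFNULL(w.wins, 0)  AS wins,
       IFNULL(l.loses, 0) AS loses,
       IFNULL(w.wins, 0) * 1.0 / (IFNULL(w.wins, 0) + IFNULL(l.loses, 0)) AS pct
  FROM (SELECT id_user_winner AS id_user FROM user_versus
        UNION
        SELECT id_user_loser FROM user_versus) u
  LEFT JOIN (SELECT id_user_winner, COUNT(*) wins
               FROM user_versus GROUP BY id_user_winner) w
         ON u.id_user = w.id_user_winner
  LEFT JOIN (SELECT id_user_loser, COUNT(*) loses
               FROM user_versus GROUP BY id_user_loser) l
         ON u.id_user = l.id_user_loser
 ORDER BY u.id_user
""").fetchall()
```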
qid & accept id:
(19663813, 19663954)
query:
MySQL: Counting Latest Occurrences of Field in Another Table
soup:
soup wrap:
select status_id, count(1) cnt
from statushistory h
where not exists
(select 1 from statushistory h1
where h1.project_id=h.project_id and h1.date_added>h.date_added)
group by status_id
Here it is to test in SQLfiddle
This is its version, checking projects table:
select status_id, count(1) cnt
from statushistory h, projects p
where p.project_id=h.project_id and p.active=1
and not exists
(select 1 from statushistory h1
where h1.project_id=h.project_id and h1.date_added>h.date_added)
group by status_id
See it in fiddle here
Of course, to run this effectively you definitely need an index on (project_id, date_added), and maybe on status_id too (check whether its presence changes the query execution plan).
I am not sure whether low performance caused by a subquery in the WHERE clause is a myth or not, but here is a version without it (based partly on Mosty Mostacho's code). You are welcome to compare these queries and tell us which performs better.
select h.status_id, count(*) cnt FROM (
select project_id, max(date_added) maxdate
from statushistory
group by project_id
) h1, statushistory h, projects p
where h.project_id=h1.project_id and h.date_added=h1.maxdate
and p.project_id=h.project_id and p.active=1
group by h.status_id
See it in fiddle here
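The NOT EXISTS version is easy to verify with sqlite3 and a few invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE statushistory (project_id INTEGER, status_id INTEGER, date_added TEXT);
INSERT INTO statushistory VALUES
  (1, 10, '2013-10-01'), (1, 20, '2013-10-05'),  -- project 1 latest: status 20
  (2, 20, '2013-10-02'),                         -- project 2 latest: status 20
  (3, 30, '2013-10-03');                         -- project 3 latest: status 30
""")

# NOT EXISTS keeps only each project's most recent row; then count per status.
rows = conn.execute("""
SELECT status_id, COUNT(1) cnt
  FROM statushistory h
 WHERE NOT EXISTS (SELECT 1 FROM statushistory h1
                    WHERE h1.project_id = h.project_id
                      AND h1.date_added > h.date_added)
 GROUP BY status_id
 ORDER BY status_id
""").fetchall()
```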
qid & accept id:
(19680651, 19681047)
query:
FULL OUTER JOIN with temp tables
soup:
soup wrap:
You can still use a FULL JOIN, just use ISNULL on the second join condition:
SELECT RowNumber = COALESCE(t.RowNumber, e.RowNumber, d.RowNumber),
EmployeeID = COALESCE(t.EmployeeID, e.EmployeeID, d.EmployeeID),
t.FirstName,
t.MiddleName,
t.LastName,
t.SSN,
t.EmployeeCode,
t.TaxName,
t.Amount,
t.GrossPay,
t.CompanyId,
e.EarningDescription,
EarningAmount = e.Amount,
d.DeductionDescription,
DeductionAmount = d.Amount
FROM @Tax t
FULL JOIN @Earnings e
ON t.EmployeeID = e.EmployeeID
AND t.RowNumber = e.RowNumber
FULL JOIN @Deductions D
ON d.EmployeeID = ISNULL(t.EmployeeID, e.EmployeeID)
AND d.RowNumber = ISNULL(t.RowNumber, e.RowNumber);
Working example below (all columns other than those needed for the joins are null, though).
DECLARE @Tax Table
(
RowNumber int ,
FirstName nvarchar(50),
MiddleName nvarchar(50),
LastName nvarchar(50),
SSN nvarchar(50),
EmployeeCode nvarchar(50),
TaxName nvarchar(50),
Amount decimal(18,2),
GrossPay decimal(18,2),
CompanyId int,
EmployeeId int
)
INSERT @Tax (RowNumber, EmployeeID)
VALUES (1, 1), (2, 1), (3, 1), (4, 1);
DECLARE @Earnings TABLE
(
RowNumber int ,
EmployeeId int,
EarningDescription nvarchar(50),
Amount decimal(18,2)
)
INSERT @Earnings (RowNumber, EmployeeID)
VALUES (1, 1), (2, 1);
DECLARE @Deductions TABLE
(
RowNumber int ,
EmployeeId int,
DeductionDescription nvarchar(50),
Amount decimal(18,2)
)
INSERT @Deductions (RowNumber, EmployeeID)
VALUES (1, 1), (2, 1), (3, 1), (4, 1), (5, 1), (6, 1);
SELECT RowNumber = COALESCE(t.RowNumber, e.RowNumber, d.RowNumber),
EmployeeID = COALESCE(t.EmployeeID, e.EmployeeID, d.EmployeeID),
t.FirstName,
t.MiddleName,
t.LastName,
t.SSN,
t.EmployeeCode,
t.TaxName,
t.Amount,
t.GrossPay,
t.CompanyId,
e.EarningDescription,
EarningAmount = e.Amount,
d.DeductionDescription,
DeductionAmount = d.Amount
FROM @Tax t
FULL JOIN @Earnings e
ON t.EmployeeID = e.EmployeeID
AND t.RowNumber = e.RowNumber
FULL JOIN @Deductions D
ON d.EmployeeID = ISNULL(t.EmployeeID, e.EmployeeID)
AND d.RowNumber = ISNULL(t.RowNumber, e.RowNumber);
qid & accept id:
(19690325, 19690486)
query:
SQL Query to get recursive count of employees under each manager
soup:
soup wrap:
First off, an important note: the first row of Emp_Table, where Emp_Id == Manager_Id == 1, is not only meaningless but will also cause infinite recursion. I suggest you remove it.
In order to provide an answer, however, I first created a view that eliminates such invalid entries, and used that instead of Emp_Table:
create view valid_mng as
select Emp_Id,Manager_id from Emp_Table
where Emp_Id<>Manager_Id
So it boils down to the following, with a little help of a recursive CTE:
With cte as (
select Emp_Id,Manager_id from valid_mng
union all
select c.Emp_Id,e.Manager_Id
from cte c join valid_mng e on (c.Manager_Id=e.Emp_Id)
)
select m.Manager_Id,count(e.Emp_Id) as Count_of_Employees
from [Execute] m
left join cte e on (e.Manager_Id=m.Manager_Id)
group by m.Manager_Id
If you eventually remove the offending row(s), or better yet set Manager_Id=NULL as HABO suggested, just ignore the valid_mng view and replace it with Emp_Table everywhere.
Also a side note: EXECUTE is a reserved word in MSSQL. Avoiding reserved words when naming user objects is always good practice.
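The same CTE runs on any engine with recursive CTEs. A sqlite3 sketch with an invented three-person hierarchy (the valid_mng view is skipped because this sample data has no self-managed row):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Emp_Table (Emp_Id INTEGER, Manager_Id INTEGER);
-- 2 and 3 report to 1; 4 reports to 2, so 1 has 3 people under them in total.
INSERT INTO Emp_Table VALUES (2, 1), (3, 1), (4, 2);
""")

# The recursive CTE pairs every employee with each of their direct and
# indirect managers; counting pairs per manager gives the recursive headcount.
rows = conn.execute("""
WITH RECURSIVE cte AS (
  SELECT Emp_Id, Manager_Id FROM Emp_Table
  UNION ALL
  SELECT c.Emp_Id, e.Manager_Id
    FROM cte c JOIN Emp_Table e ON c.Manager_Id = e.Emp_Id
)
SELECT Manager_Id, COUNT(Emp_Id) AS Count_of_Employees
  FROM cte
 GROUP BY Manager_Id
 ORDER BY Manager_Id
""").fetchall()
```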
qid & accept id:
(19707228, 19707902)
query:
XML/SQL - Adding a string at the end of each line in individual fields
soup:
soup wrap:
In this case, there is no real way to distinguish between one newline and two newlines.
Does this do the job?
select replace(details, E'\n', ''||E'\n') from personal_details
EDIT:
After reading your latest edit with extra care to the desired result,
I also suggest a double replace:
select replace(
replace(details, E'\n\n', ''||E'\n'),
E'\n', ' '||E'\n')
from personal_details
The inner replace, which runs first, replaces each double newline with your desired extra string just once, plus one newline,
while the outer replace then adds the desired string at every remaining newline.
If you want single-line output in your file, just remove the final ||E'\n' of the outer replace.
qid & accept id:
(19716510, 19716638)
query:
Add Select and Write Privileges to User for Specific Table Names
soup:
soup wrap:
If you want to grant the privileges directly to the user
GRANT select, update, insert
ON table_owner.feed_data_a
TO user_a;
GRANT select, update, insert
ON table_owner.feed_data_b
TO user_a;
More commonly, though, you would create a role, grant the role to the user, and grant the privileges to the role. That makes it easier in the future when there is a new user created that you want to have the same privileges as USER_A to just grant a couple of roles rather than figuring out all the privileges that potentially need to be granted. It also makes it easier as new tables are created and new privileges are granted to ensure that users that should have the same privileges continue to have the same privileges.
CREATE ROLE feed_data_role;
GRANT select, update, insert
ON table_owner.feed_data_a
TO feed_data_role;
GRANT select, update, insert
ON table_owner.feed_data_b
TO feed_data_role;
GRANT feed_data_role
TO user_a
qid & accept id:
(19718193, 19718374)
query:
SQL query to return rows from one table that don't exist in another
soup:
soup wrap:
Personally, I'd use a MINUS
SELECT *
FROM code_mapping
WHERE soure_system_id = '&LHDNUMBER'
MINUS
SELECT *
FROM dm.code_mapping@prod_check
MINUS handles NULL comparisons automatically (a NULL on the source automatically matches a NULL on the target).
If you want to list all differences between the two tables (i.e. list all rows that exist in dev but not prod and prod but not dev), you can add a UNION ALL
(SELECT a.*, 'In dev but not prod' description
FROM dev_table a
MINUS
SELECT a.*, 'In dev but not prod' description
FROM prod_table a)
UNION ALL
(SELECT a.*, 'In prod but not dev' description
FROM prod_table a
MINUS
SELECT a.*, 'In prod but not dev' description
FROM dev_table a)
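The portable spelling of Oracle's MINUS is EXCEPT. A sqlite3 sketch of the two-direction diff with invented dev/prod rows; like MINUS, EXCEPT treats NULLs as equal:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dev_table  (id INTEGER, val TEXT);
CREATE TABLE prod_table (id INTEGER, val TEXT);
INSERT INTO dev_table  VALUES (1, 'a'), (2, 'b');
INSERT INTO prod_table VALUES (1, 'a'), (3, 'c');
""")

def diff(a, b, description):
    # One direction of the comparison: rows in table a that are not in table b.
    q = f"SELECT id, val, ? FROM {a} EXCEPT SELECT id, val, ? FROM {b}"
    return conn.execute(q, (description, description)).fetchall()

rows = (diff("dev_table", "prod_table", "In dev but not prod")
        + diff("prod_table", "dev_table", "In prod but not dev"))
```

Running the two directions separately sidesteps compound-query precedence, which differs between engines.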
qid & accept id:
(19748723, 19748863)
query:
how I can use mysql date greater in order by case?
soup:
soup wrap:
Are you looking for something like this?
SELECT *
FROM users
ORDER BY (COALESCE(subs_end_datetime, 0) <= CURDATE()), id
Here is SQLFiddle demo
Based on your comments
SELECT *, subs_end_datetime <= CURDATE() aa
FROM users
ORDER BY (COALESCE(subs_end_datetime, 0) <= CURDATE()), subs_end_datetime DESC
Here is SQLFiddle demo
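The trick relies on the comparison evaluating to 0 or 1, so active rows (0) sort before expired ones (1). A sqlite3 sketch with invented users, where ISO-8601 strings stand in for DATETIME values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER, subs_end_datetime TEXT);
INSERT INTO users VALUES
  (1, '2013-01-01'),  -- already expired
  (2, '2099-01-01'),  -- still active
  (3, NULL);          -- no subscription at all: COALESCE makes it sort as expired
""")

# Active subscriptions (comparison = 0) come first, then expired ones,
# with id as the tie-breaker -- the same shape as the MySQL answer.
rows = conn.execute("""
SELECT id FROM users
 ORDER BY (COALESCE(subs_end_datetime, '0') <= DATE('now')), id
""").fetchall()
```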
qid & accept id:
(19752084, 19752163)
query:
SQL case statement in join condition
soup:
soup wrap:
You could try something like :
AND (table1.counter IS NULL OR table1.counter=table2.counter)
Instead of :
AND table1.counter=table2.counter
In your first query.
qid & accept id:
(19765962, 19767746)
query:
Calculating days to excluding weekends (Monday to Friday) in SQL Server
soup:
soup wrap:
I would always recommend a Calendar table, then you can simply use:
SELECT COUNT(*)
FROM dbo.CalendarTable
WHERE IsWorkingDay = 1
AND [Date] > @StartDate
AND [Date] <= @EndDate;
Since SQL has no knowledge of national holidays, for example, the number of weekdays between two dates does not always equal the number of working days. This is why a calendar table is a must for most databases. It does not take much storage and simplifies a lot of queries.
But if this is not an option then you can generate a table of dates relatively easily on the fly and use this
SET DATEFIRST 1;
DECLARE @StartDate DATETIME = '20131103',
@EndDate DATETIME = '20131104';
-- GENERATE A LIST OF ALL DATES BETWEEN THE START DATE AND THE END DATE
WITH AllDates AS
( SELECT TOP (DATEDIFF(DAY, @StartDate, @EndDate))
D = DATEADD(DAY, ROW_NUMBER() OVER(ORDER BY a.Object_ID), @StartDate)
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
)
SELECT WeekDays = COUNT(*)
FROM AllDates
WHERE DATEPART(WEEKDAY, D) NOT IN (6, 7);
EDIT
If you need to calculate the difference between two date columns you can still use your calendar table, like so:
SELECT t.ID,
t.Date1,
t.Date2,
WorkingDays = COUNT(c.DateKey)
FROM TestTable t
LEFT JOIN dbo.Calendar c
ON c.DateKey >= t.Date1
AND c.DateKey < t.Date2
AND c.IsWorkingDay = 1
GROUP BY t.ID, t.Date1, t.Date2;
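Without a database at hand, the working-day count over the same half-open (start, end] range can be sketched in plain Python; the holidays set is the part a calendar table would normally supply:

```python
from datetime import date, timedelta

def working_days(start: date, end: date, holidays=frozenset()) -> int:
    """Count days d with start < d <= end that are Mon-Fri and not holidays,
    mirroring the Date > @StartDate AND Date <= @EndDate range above."""
    count, d = 0, start + timedelta(days=1)
    while d <= end:
        if d.weekday() < 5 and d not in holidays:  # weekday(): Mon=0 .. Sun=6
            count += 1
        d += timedelta(days=1)
    return count
```

For example, 2013-11-03 was a Sunday and 2013-11-04 a Monday, so the (2013-11-03, 2013-11-04] range contains one working day.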
qid & accept id:
(19835090, 21036491)
query:
Replace multiple characters from string without using any nested replace functions
soup:
soup wrap:
I created a SPLIT function to implement this because I needed to perform the operation multiple times in a procedure.
SPLIT FUNCTION
create function [dbo].[Split](@String varchar(8000), @Delimiter char(1))
returns @temptable TABLE (items varchar(8000))
as
begin
declare @idx int
declare @slice varchar(8000)
select @idx = 1
if len(@String)<1 or @String is null return
while @idx!= 0
begin
set @idx = charindex(@Delimiter,@String)
if @idx!=0
set @slice = left(@String,@idx - 1)
else
set @slice = @String
if(len(@slice)>0)
insert into @temptable(Items) values(@slice)
set @String = right(@String,len(@String) - @idx)
if len(@String) = 0 break
end
return
end
Code used in procedure:
DECLARE @NEWSTRING VARCHAR(100)
SET @NEWSTRING = '(N_100-(6858)*(6858)*N_100/0_2)%N_35' ;
SELECT @NEWSTRING = REPLACE(@NEWSTRING, items, '~') FROM dbo.Split('+,-,*,/,%,(,)', ',');
PRINT @NEWSTRING
OUTPUT
~N_100~~6858~~~6858~~N_100~0_2~~N_35
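The same multi-character replacement can be reproduced outside SQL as a sanity check; here is a minimal Python sketch (not part of the original answer) doing it in a single pass:

```python
# Replace each of the seven operator/paren characters with '~',
# reproducing the REPLACE-per-split-item loop in one translate() call.
table = str.maketrans({c: '~' for c in '+-*/%()'})
s = '(N_100-(6858)*(6858)*N_100/0_2)%N_35'
print(s.translate(table))  # ~N_100~~6858~~~6858~~N_100~0_2~~N_35
```

The output matches the procedure's output above.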
qid & accept id:
(19837655, 19837754)
query:
SQL Server query dry run
soup:
Use an SQL transaction to make your changes then back them out.
\nBefore you execute your script:
\nBEGIN TRANSACTION;\n
\nAfter you execute your script and have done your checking:
\nROLLBACK TRANSACTION;\n
\nEvery change in your script will then be undone.
\nNote: Make sure you don't have a COMMIT in your script!
\n
soup wrap:
Use a SQL transaction to make your changes, then back them out.
Before you execute your script:
BEGIN TRANSACTION;
After you execute your script and have done your checking:
ROLLBACK TRANSACTION;
Every change in your script will then be undone.
Note: Make sure you don't have a COMMIT in your script!
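The same dry-run pattern can be demonstrated with any database that supports transactions; a minimal sketch using Python's built-in sqlite3 module (table and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (x INT)')
conn.execute('INSERT INTO t VALUES (1)')
conn.commit()

# Dry run: execute the script, check the result, then roll back.
conn.execute('DELETE FROM t')  # the script under test
assert conn.execute('SELECT COUNT(*) FROM t').fetchone()[0] == 0  # checking
conn.rollback()                # ROLLBACK TRANSACTION: every change is undone

print(conn.execute('SELECT COUNT(*) FROM t').fetchone()[0])  # 1
```

Note that sqlite3 opens the transaction implicitly before the DELETE; calling `conn.commit()` anywhere in the script would defeat the rollback, just as the answer warns.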
qid & accept id:
(19866409, 19866600)
query:
How to add one nanosecond to a timestamp in PL/SQL
soup:
interval day to second literal can be used to add fractional seconds to a timestamp value:
\nIn this example we add one nanosecond:
\nselect timestamp '2013-11-11 22:10:10.111111111' + \n interval '0 00:00:00.000000001' day to second(9) as res\n from dual\n
\nResult:
\nRES \n-------------------------------\n11-NOV-13 10.10.10.111111112 PM \n
\nNote: When you are using the to_timestamp() function to convert a character literal to a value of timestamp data type, it's a good idea to specify a format mask (not rely on NLS settings).
\nselect TO_TIMESTAMP('11-11-2013 22:10:10:111111111', 'dd-mm-yyyy hh24:mi:ss:ff9') + \n interval '0 00:00:00.000000001' day to second(9) as res\n from dual\n
\nResult:
\nRES \n-------------------------------\n11-NOV-13 10.10.10.111111112 PM \n
\nNote: As you intend to process values of timestamp data type using PL/SQL you should be aware of the following. The default precision of fractional seconds for values of timestamp data type, in PL/SQL, is 6 not 9 as it is in SQL, so you may expect truncation of fractional second. In order to avoid truncation of fractional seconds use timestamp_unconstrained and dsinterval_unconstrained data types instead of timestamp and interval day to second:
\ndeclare\n l_tmstmp timestamp_unconstrained := to_timestamp('11-11-2013 22:10:10:111111111',\n 'dd-mm-yyyy hh24:mi:ss:ff9');\n l_ns dsinterval_unconstrained := interval '0.000000001' second;\nbegin\n l_tmstmp := l_tmstmp + l_ns;\n dbms_output.put_line(to_char(l_tmstmp, 'dd-mm-yyyy hh24:mi:ss:ff9'));\nend;\n
\nResult:
\nanonymous block completed\n11-11-2013 22:10:10:111111112\n
\n
soup wrap:
An interval day to second literal can be used to add fractional seconds to a timestamp value:
In this example we add one nanosecond:
select timestamp '2013-11-11 22:10:10.111111111' +
interval '0 00:00:00.000000001' day to second(9) as res
from dual
Result:
RES
-------------------------------
11-NOV-13 10.10.10.111111112 PM
Note: When you are using the to_timestamp() function to convert a character literal to a value of timestamp data type, it's a good idea to specify a format mask (not rely on NLS settings).
select TO_TIMESTAMP('11-11-2013 22:10:10:111111111', 'dd-mm-yyyy hh24:mi:ss:ff9') +
interval '0 00:00:00.000000001' day to second(9) as res
from dual
Result:
RES
-------------------------------
11-NOV-13 10.10.10.111111112 PM
Note: As you intend to process values of timestamp data type using PL/SQL, you should be aware of the following. The default precision of fractional seconds for values of timestamp data type in PL/SQL is 6, not 9 as it is in SQL, so you may see truncation of fractional seconds. To avoid truncation of fractional seconds, use the timestamp_unconstrained and dsinterval_unconstrained data types instead of timestamp and interval day to second:
declare
l_tmstmp timestamp_unconstrained := to_timestamp('11-11-2013 22:10:10:111111111',
'dd-mm-yyyy hh24:mi:ss:ff9');
l_ns dsinterval_unconstrained := interval '0.000000001' second;
begin
l_tmstmp := l_tmstmp + l_ns;
dbms_output.put_line(to_char(l_tmstmp, 'dd-mm-yyyy hh24:mi:ss:ff9'));
end;
Result:
anonymous block completed
11-11-2013 22:10:10:111111112
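Setting Oracle aside, the precision issue itself is easy to illustrate with plain integer arithmetic; a small Python sketch (illustration only, not PL/SQL) of why six fractional digits lose the added nanosecond:

```python
# Track fractional seconds as integer nanoseconds (9 digits).
ns = int('111111111')       # .111111111 seconds
ns_plus_one = ns + 1        # add one nanosecond -> .111111112

# At PL/SQL's default precision of 6 fractional digits, the last three
# digits would be truncated, and the added nanosecond disappears:
truncated = ns_plus_one // 1000 * 1000
print(ns_plus_one, truncated)  # 111111112 111111000
```

This is why the unconstrained types are needed: at precision 6 the result of the addition is indistinguishable from the original value.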
qid & accept id:
(19872492, 19882626)
query:
Detect if MySQL has duplicates when inserting
soup:
In order to be able to change a value of value1 with ON DUPLICATE KEY clause you have to have either a UNIQUE constraint or a PRIMARY KEY on (value2, value3).
\nALTER TABLE table1 ADD UNIQUE (value2, value3);\n
\nNow to simplify your insert statement you can also use VALUES() in ON DUPLICATE KEY like this
\nINSERT INTO Table1 (`value1`, `value2`, `value3`)\nVALUES ('$valueForValue1', '$valueForValue2', '$valueForValue3')\nON DUPLICATE KEY UPDATE value1 = VALUES(value1);\n
\nHere is SQLFIddle demo
\n
soup wrap:
In order to change the value of value1 with the ON DUPLICATE KEY clause, you have to have either a UNIQUE constraint or a PRIMARY KEY on (value2, value3).
ALTER TABLE table1 ADD UNIQUE (value2, value3);
Now, to simplify your insert statement, you can also use VALUES() in the ON DUPLICATE KEY clause, like this:
INSERT INTO Table1 (`value1`, `value2`, `value3`)
VALUES ('$valueForValue1', '$valueForValue2', '$valueForValue3')
ON DUPLICATE KEY UPDATE value1 = VALUES(value1);
Here is a SQLFiddle demo.
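MySQL's ON DUPLICATE KEY UPDATE has a close analogue in SQLite (ON CONFLICT ... DO UPDATE, available since SQLite 3.24), so the pattern can be sketched and run with Python's built-in sqlite3 module; the table and values here are illustrative:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('''CREATE TABLE Table1 (
    value1 TEXT, value2 TEXT, value3 TEXT,
    UNIQUE (value2, value3))''')   # the constraint the upsert needs

conn.execute("INSERT INTO Table1 VALUES ('a', 'x', 'y')")
# The second insert hits the UNIQUE key, so value1 is updated instead;
# excluded.value1 plays the role of MySQL's VALUES(value1).
conn.execute("""INSERT INTO Table1 (value1, value2, value3)
                VALUES ('b', 'x', 'y')
                ON CONFLICT(value2, value3)
                DO UPDATE SET value1 = excluded.value1""")

print(conn.execute('SELECT value1 FROM Table1').fetchall())  # [('b',)]
```

Without the UNIQUE constraint, the second insert would simply create a second row, which is the behavior the answer warns about.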
qid & accept id:
(19902526, 19902958)
query:
table into a table row
soup:
Stop. Don't create tables for each category. Use a proper schema design from the beginning. It will pay off big time by allowing you normally maintain and query your data.
\nIn your case the schema might look like
\nCREATE TABLE categories\n(\n category_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, \n category_name VARCHAR(128)\n);\n\nCREATE TABLE items\n(\n item_id int NOT NULL AUTO_INCREMENT PRIMARY KEY, \n category_id INT, \n item_name VARCHAR(128),\n FOREIGN KEY (category_id) REFERENCES categories (category_id)\n);\n
\nTo insert new items and associate them with categories
\nINSERT INTO items (category_id, item_name)\nVALUES (1, 'Hard disk');\nINSERT INTO items (category_id, item_name)\nVALUES (2, 'Java');\n
\nTo get items in category Hardware
\nSELECT item_id, item_name\n FROM items i JOIN categories c\n ON i.category_id = c.category_id\n WHERE c.category_name = 'Hardware'\n
\nor to easily get a count of items per category
\nSELECT category_name, COUNT(item_id) no_items\n FROM categories c LEFT JOIN items i\n ON c.category_id = i.category_id\n GROUP BY c.category_id, c.category_name;\n
\nHere is SQLFiddle demo
\nIf an item may belong to different categories then you'll need a many-to-many table categories_items.
\n
soup wrap:
Stop. Don't create tables for each category. Use a proper schema design from the beginning. It will pay off big time by allowing you to maintain and query your data normally.
In your case the schema might look like
CREATE TABLE categories
(
category_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
category_name VARCHAR(128)
);
CREATE TABLE items
(
item_id int NOT NULL AUTO_INCREMENT PRIMARY KEY,
category_id INT,
item_name VARCHAR(128),
FOREIGN KEY (category_id) REFERENCES categories (category_id)
);
To insert new items and associate them with categories
INSERT INTO items (category_id, item_name)
VALUES (1, 'Hard disk');
INSERT INTO items (category_id, item_name)
VALUES (2, 'Java');
To get items in category Hardware
SELECT item_id, item_name
FROM items i JOIN categories c
ON i.category_id = c.category_id
WHERE c.category_name = 'Hardware'
or to easily get a count of items per category
SELECT category_name, COUNT(item_id) no_items
FROM categories c LEFT JOIN items i
ON c.category_id = i.category_id
GROUP BY c.category_id, c.category_name;
Here is a SQLFiddle demo.
If an item may belong to different categories then you'll need a many-to-many table categories_items.
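The whole schema above is portable enough to run as-is elsewhere; here is a runnable sketch using Python's built-in sqlite3 module (category names other than 'Hardware' are assumed for the example):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript('''
CREATE TABLE categories (
    category_id INTEGER PRIMARY KEY AUTOINCREMENT,
    category_name VARCHAR(128)
);
CREATE TABLE items (
    item_id INTEGER PRIMARY KEY AUTOINCREMENT,
    category_id INT,
    item_name VARCHAR(128),
    FOREIGN KEY (category_id) REFERENCES categories (category_id)
);
INSERT INTO categories (category_name) VALUES ('Hardware'), ('Languages');
INSERT INTO items (category_id, item_name) VALUES (1, 'Hard disk'), (2, 'Java');
''')

rows = conn.execute('''SELECT item_name
                         FROM items i JOIN categories c
                           ON i.category_id = c.category_id
                        WHERE c.category_name = 'Hardware' ''').fetchall()
print(rows)  # [('Hard disk',)]
```

New categories become rows in `categories`, not new tables, which is the whole point of the answer.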
qid & accept id:
(19939624, 19940421)
query:
Format the XML String Generated using Oracle XMLAgg
soup:
You have to use XMLSERIALIZE:
\nSELECT\n XMLSERIALIZE(DOCUMENT\n XMLElement("Sample-Test" ,\n XMLAgg(\n XMLElement("Sample",\n XMLElement("SAMPLE_NUM", s.sample_number), \n XMLElement("LABEL_ID", s.label_id),\n XMLElement("STATUS", s.status),\n (SELECT \n XMLAgg( \n XMLElement("Test-Details",\n XMLElement("TEST_NUM", t.test_number),\n XMLElement("ANALYSIS", t.analysis), \n (SELECT \n XMLAgg( \n XMLElement("Result-Details",\n XMLElement("RESULT_NUM", R.RESULT_NUMBER),\n XMLElement("RESULT_NAME", R.NAME))) \n FROM RESULT R WHERE t.test_number = R.test_number \n and t.SAMPLE_number = R.SAMPLE_NUMBER\n ))) \n FROM TEST T WHERE t.SAMPLE_number = S.SAMPLE_NUMBER))) \n ) AS CLOB INDENT SIZE = 2) as XML \n FROM sample s \n WHERE s.sample_number = 720000020018;\n
\nEdit
\nIt is not working for you, because, most probably, you are using Oracle 10g, and the INDENT option was introduced in version 11g. If this is the case, try below approach with the EXTRACT('*'):
\nSELECT\n XMLElement("Sample-Test" ,\n XMLAgg(\n XMLElement("Sample",\n XMLElement("SAMPLE_NUM", s.sample_number), \n XMLElement("LABEL_ID", s.label_id),\n XMLElement("STATUS", s.status),\n (SELECT \n XMLAgg( \n XMLElement("Test-Details",\n XMLElement("TEST_NUM", t.test_number),\n XMLElement("ANALYSIS", t.analysis), \n (SELECT \n XMLAgg( \n XMLElement("Result-Details",\n XMLElement("RESULT_NUM", R.RESULT_NUMBER),\n XMLElement("RESULT_NAME", R.NAME))) \n FROM RESULT R WHERE t.test_number = R.test_number \n and t.SAMPLE_number = R.SAMPLE_NUMBER\n ))) \n FROM TEST T WHERE t.SAMPLE_number = S.SAMPLE_NUMBER))) \n ).EXTRACT('*') as XML \n FROM sample s \n WHERE s.sample_number = 720000020018;\n
\n
soup wrap:
You have to use XMLSERIALIZE:
SELECT
XMLSERIALIZE(DOCUMENT
XMLElement("Sample-Test" ,
XMLAgg(
XMLElement("Sample",
XMLElement("SAMPLE_NUM", s.sample_number),
XMLElement("LABEL_ID", s.label_id),
XMLElement("STATUS", s.status),
(SELECT
XMLAgg(
XMLElement("Test-Details",
XMLElement("TEST_NUM", t.test_number),
XMLElement("ANALYSIS", t.analysis),
(SELECT
XMLAgg(
XMLElement("Result-Details",
XMLElement("RESULT_NUM", R.RESULT_NUMBER),
XMLElement("RESULT_NAME", R.NAME)))
FROM RESULT R WHERE t.test_number = R.test_number
and t.SAMPLE_number = R.SAMPLE_NUMBER
)))
FROM TEST T WHERE t.SAMPLE_number = S.SAMPLE_NUMBER)))
) AS CLOB INDENT SIZE = 2) as XML
FROM sample s
WHERE s.sample_number = 720000020018;
Edit
It is not working for you most probably because you are using Oracle 10g, and the INDENT option was introduced in version 11g. If this is the case, try the approach below with EXTRACT('*'):
SELECT
XMLElement("Sample-Test" ,
XMLAgg(
XMLElement("Sample",
XMLElement("SAMPLE_NUM", s.sample_number),
XMLElement("LABEL_ID", s.label_id),
XMLElement("STATUS", s.status),
(SELECT
XMLAgg(
XMLElement("Test-Details",
XMLElement("TEST_NUM", t.test_number),
XMLElement("ANALYSIS", t.analysis),
(SELECT
XMLAgg(
XMLElement("Result-Details",
XMLElement("RESULT_NUM", R.RESULT_NUMBER),
XMLElement("RESULT_NAME", R.NAME)))
FROM RESULT R WHERE t.test_number = R.test_number
and t.SAMPLE_number = R.SAMPLE_NUMBER
)))
FROM TEST T WHERE t.SAMPLE_number = S.SAMPLE_NUMBER)))
).EXTRACT('*') as XML
FROM sample s
WHERE s.sample_number = 720000020018;
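XMLSERIALIZE with INDENT is Oracle-specific, but the underlying idea of serializing an XML document with a chosen indent size can be illustrated with Python's xml.dom.minidom (an analogue only, not Oracle; the sample document is assumed):

```python
from xml.dom import minidom

raw = ('<Sample-Test><Sample>'
       '<SAMPLE_NUM>720000020018</SAMPLE_NUM>'
       '</Sample></Sample-Test>')
# Two-space indentation, analogous to INDENT SIZE = 2 in the query above.
pretty = minidom.parseString(raw).toprettyxml(indent='  ')
print(pretty)
```

The compact single-line document comes back with one element per line and nested elements indented two spaces per level.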
qid & accept id:
(19941944, 19942007)
query:
How to execute two DELETE queries one after another
soup:
You can execute queries in succession by separating them with a semicolon ;. More details are in the MySQL documentation.
\nSimply do:
\nDELETE FROM A WHERE Id IN (SELECT Id FROM B); DELETE FROM B;\n
\nBased on your requirement; this does exactly what you asked for based on the below example:
\nmysql> select sleep(5); show databases;\n+----------+\n| sleep(5) |\n+----------+\n| 0 |\n+----------+\n1 row in set (5.00 sec)\n\n+--------------------+\n| Database |\n+--------------------+\n| ... |\n+--------------------+\n9 rows in set (0.01 sec)\n
\nYou can do this with mysql -e command and virtually any mysql library (such as the one with php).
\n
soup wrap:
You can execute queries in succession by separating them with a semicolon (;). More details are in the MySQL documentation.
Simply do:
DELETE FROM A WHERE Id IN (SELECT Id FROM B); DELETE FROM B;
Based on your requirement, this does exactly what you asked for, as the example below shows:
mysql> select sleep(5); show databases;
+----------+
| sleep(5) |
+----------+
| 0 |
+----------+
1 row in set (5.00 sec)
+--------------------+
| Database |
+--------------------+
| ... |
+--------------------+
9 rows in set (0.01 sec)
You can do this with the mysql -e command and virtually any MySQL library (such as the one that ships with PHP).
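As one example of a client library running semicolon-separated statements back to back, Python's built-in sqlite3 module offers executescript (tables and data here are illustrative, mirroring the A/B example above):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript('''
CREATE TABLE B (Id INT);
CREATE TABLE A (Id INT);
INSERT INTO B VALUES (1), (2);
INSERT INTO A VALUES (1), (3);
-- the two deletes run one after another, separated by semicolons:
DELETE FROM A WHERE Id IN (SELECT Id FROM B);
DELETE FROM B;
''')
print(conn.execute('SELECT Id FROM A').fetchall())           # [(3,)]
print(conn.execute('SELECT COUNT(*) FROM B').fetchone()[0])  # 0
```

Because the statements run in order, the subquery against B still sees B's rows when the first DELETE executes; only afterwards is B emptied.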
qid & accept id:
(19949250, 19949316)
query:
oracle date to string
soup:
You shouldn't use the FM format model, because FM, as written in the documentation:
\nFM - Used in combination with other elements to direct the suppression of leading or trailing blanks
\nSo using FM will make your final string shorter, if possible.
\nYou should remove the FM from your format model mask and it will work as you expect:
\nselect to_char(TRUNC(sysdate), 'mm/dd/yyyy hh12:mi:ss am') from dual;\n
\nOutput:
\n11/13/2013 12:00:00 am.
\nI've changed my answer after reading Nicholas Krasnov's comment (thanks).
\nMore about date format models in Oracle Documentation: Format models
\nEdit
\nYes, the code I provided would return, for example, 01-01-2013. If you want the month and day without leading zeroes, then you should write it like this: fmDD-MM-YYYY fmHH:MI:SS.
\nThe first fm makes the leading zeroes be truncated. The second fm turns off that feature and you do get leading zeroes for the time part of the date, example:
\nSELECT TO_CHAR(\n TO_DATE('01-01-2013 10:00:00', 'DD-MM-YYYY HH12:MI:SS'),\n 'fmmm/dd/yyyy fmhh12:mi:ss am')\nFROM dual;\n
\nOutput:
\n1/1/2013 10:00:00 am.
\n
soup wrap:
You shouldn't use the FM format model, because FM, as written in the documentation:
FM - Used in combination with other elements to direct the suppression of leading or trailing blanks
So using FM will make your final string shorter, if possible.
You should remove the FM from your format model mask and it will work as you expect:
select to_char(TRUNC(sysdate), 'mm/dd/yyyy hh12:mi:ss am') from dual;
Output:
11/13/2013 12:00:00 am.
I've changed my answer after reading Nicholas Krasnov's comment (thanks).
More about date format models in Oracle Documentation: Format models
Edit
Yes, the code I provided would return, for example, 01-01-2013. If you want the month and day without leading zeroes, then you should write it like this: fmDD-MM-YYYY fmHH:MI:SS.
The first fm causes the leading zeroes to be truncated. The second fm turns that feature off, so you do get leading zeroes for the time part of the date. Example:
SELECT TO_CHAR(
TO_DATE('01-01-2013 10:00:00', 'DD-MM-YYYY HH12:MI:SS'),
'fmmm/dd/yyyy fmhh12:mi:ss am')
FROM dual;
Output:
1/1/2013 10:00:00 am.
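The fm behavior can be mimicked outside Oracle; a small Python sketch (the `fm` helper is hypothetical, named after the modifier) that strips leading zeroes from the date part while leaving the time part zero-padded:

```python
from datetime import datetime

def fm(date_part: str) -> str:
    """Strip leading zeroes from each number, like Oracle's fm modifier."""
    return '/'.join(piece.lstrip('0') or '0' for piece in date_part.split('/'))

d = datetime(2013, 1, 1, 10, 0, 0)
date_part = d.strftime('%m/%d/%Y')     # '01/01/2013' (zero-padded)
time_part = d.strftime('%I:%M:%S %p')  # stays padded, like after the second fm
print(fm(date_part), time_part.lower())  # 1/1/2013 10:00:00 am
```

This reproduces the `1/1/2013 10:00:00 am` output shown above.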
qid & accept id:
(19968525, 19968582)
query:
How to Insert a value from column to another colum?
soup:
Okay. With the additional information from your comment, this runs on SQL 2012:
\nFirst some first aid for your data model:
\nCREATE TABLE [Orders] (\nCustomerId INT,\nProductId INT,\nQuantity INT,\nOrderDate datetime2 default GetDate(),\nEnteredBy SYSNAME default original_login() \n)\nGO\n
\nThen the transaction code would be:
\nBEGIN TRANSACTION\n\nDECLARE @Quantity INT\nDECLARE @CustomerId INT\nDECLARE @ProductId INT\n\nINSERT INTO Orders (customerId,productId,quantity) \nVALUES (@CustomerId,@ProductId,@Quantity)\n\nUPDATE Customer\nSET quantityOrder = QuantityOrder + @Quantity\nWHERE CustomerId = @CustomerId\n\nUPDATE product\nSET quantity = quantity - @Quantity\nWHERE productId = @ProductId\n\nCOMMIT TRANSACTION\n
\n
soup wrap:
Okay. With the additional information from your comment, this runs on SQL 2012:
First some first aid for your data model:
CREATE TABLE [Orders] (
CustomerId INT,
ProductId INT,
Quantity INT,
OrderDate datetime2 default GetDate(),
EnteredBy SYSNAME default original_login()
)
GO
Then the transaction code would be:
BEGIN TRANSACTION
DECLARE @Quantity INT
DECLARE @CustomerId INT
DECLARE @ProductId INT
INSERT INTO Orders (customerId,productId,quantity)
VALUES (@CustomerId,@ProductId,@Quantity)
UPDATE Customer
SET quantityOrder = QuantityOrder + @Quantity
WHERE CustomerId = @CustomerId
UPDATE product
SET quantity = quantity - @Quantity
WHERE productId = @ProductId
COMMIT TRANSACTION
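The insert-plus-two-updates transaction can be sketched in any client language; here is a runnable analogue using Python's built-in sqlite3 module (table contents and quantities are assumed for the example):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript('''
CREATE TABLE Orders   (CustomerId INT, ProductId INT, Quantity INT);
CREATE TABLE Customer (CustomerId INT, QuantityOrder INT);
CREATE TABLE Product  (ProductId INT, Quantity INT);
INSERT INTO Customer VALUES (1, 0);
INSERT INTO Product  VALUES (7, 10);
''')

qty, cust, prod = 3, 1, 7
with conn:  # one transaction: all three statements commit or roll back together
    conn.execute('INSERT INTO Orders VALUES (?, ?, ?)', (cust, prod, qty))
    conn.execute('UPDATE Customer SET QuantityOrder = QuantityOrder + ? '
                 'WHERE CustomerId = ?', (qty, cust))
    conn.execute('UPDATE Product SET Quantity = Quantity - ? '
                 'WHERE ProductId = ?', (qty, prod))

print(conn.execute('SELECT QuantityOrder FROM Customer').fetchone()[0])  # 3
print(conn.execute('SELECT Quantity FROM Product').fetchone()[0])        # 7
```

Wrapping all three statements in one transaction keeps the order row and the two running totals consistent even if one statement fails.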
qid & accept id:
(19985833, 19986040)
query:
Get mysql column values to row
soup:
You can't do this without a pivot table, which in most cases has a fixed number of columns.
\nThis page has a procedure to do it automatically: http://www.artfulsoftware.com/infotree/qrytip.php?id=523
\nBut MySQL has a function which will give you something to work with. You will not see Passenger1..PassengerN; you will see a result like this:
\n1 Steve, Gary, Tom\n2 John, Chris, Thomas\n
\nIf that is good enough for you, this is your query:
\nselect passengers.Bookingid, group_concat(bookings.Customer)\n from bookings inner join passengers on ( bookings.Bookingid = passengers.Bookingid )\ngroup by passengers.Bookingid \n
\n
soup wrap:
You can't do this without a pivot table, which in most cases has a fixed number of columns.
This page has a procedure to do it automatically: http://www.artfulsoftware.com/infotree/qrytip.php?id=523
But MySQL has a function which will give you something to work with. You will not see Passenger1..PassengerN; you will see a result like this:
1 Steve, Gary, Tom
2 John, Chris, Thomas
If that is good enough for you, this is your query:
select passengers.Bookingid, group_concat(bookings.Customer)
from bookings inner join passengers on ( bookings.Bookingid = passengers.Bookingid )
group by passengers.Bookingid
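SQLite also implements group_concat, so the query can be run end to end with Python's built-in sqlite3 module (the fixture rows are assumed; an ORDER BY is added so the output is deterministic):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript('''
CREATE TABLE bookings   (Bookingid INT, Customer TEXT);
CREATE TABLE passengers (Bookingid INT);
INSERT INTO bookings VALUES (1,'Steve'),(1,'Gary'),(1,'Tom'),(2,'John');
INSERT INTO passengers VALUES (1),(2);
''')
rows = conn.execute('''
    SELECT passengers.Bookingid, group_concat(bookings.Customer, ', ')
      FROM bookings INNER JOIN passengers
        ON bookings.Bookingid = passengers.Bookingid
     GROUP BY passengers.Bookingid
     ORDER BY passengers.Bookingid''').fetchall()
print(rows)
```

Each booking comes back as a single row with its customers collapsed into one comma-separated string, exactly the "1 Steve, Gary, Tom" shape shown above (the order within the concatenated string is not guaranteed).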
qid & accept id:
(19998376, 19998564)
query:
Getting Values from a table column and inserting to another table
soup:
Using a GROUP BY and CASE will do the trick:
\nCREATE TABLE extended_values (\n name VARCHAR(20),\n value VARCHAR(20),\n userkey INT\n);\n\nINSERT INTO extended_values VALUES ('cs1', 'tgb', 100);\nINSERT INTO extended_values VALUES ('cs2', 'hhy', 100);\nINSERT INTO extended_values VALUES ('cs3', 'ttr', 100);\nINSERT INTO extended_values VALUES ('cs1', 'hht', 104);\nINSERT INTO extended_values VALUES ('cs2', 'iyu', 104);\nINSERT INTO extended_values VALUES ('cs3', 'uyt', 104);\nINSERT INTO extended_values VALUES ('cs1', 'tjg', 106);\nINSERT INTO extended_values VALUES ('cs2', 'yyt', 106);\nINSERT INTO extended_values VALUES ('cs3', 'try', 106);\n\nCOMMIT;\n\nCREATE TABLE user_custom_property (\n userkey INT,\n cs1 VARCHAR(20),\n cs2 VARCHAR(20),\n cs3 VARCHAR(20)\n);\n\nINSERT INTO user_custom_property\n SELECT\n userkey,\n MIN(CASE WHEN name = 'cs1' THEN value END),\n MIN(CASE WHEN name = 'cs2' THEN value END),\n MIN(CASE WHEN name = 'cs3' THEN value END)\n FROM extended_values\n GROUP BY userkey;\n\nSELECT * FROM user_custom_property;\n
\nOutput:
\n USERKEY CS1 CS2 CS3 \n---------- -------------------- -------------------- --------------------\n 100 tgb hhy ttr \n 104 hht iyu uyt \n 106 tjg yyt try
\nCheck at SQLFiddle:
\n\nEdit
\nRegarding the question in the comment - you just have to change the values in the CASE:
\nINSERT INTO user_custom_property\n SELECT\n userkey,\n MIN(CASE WHEN name = 'ea1' THEN value END),\n MIN(CASE WHEN name = 'ea2' THEN value END),\n MIN(CASE WHEN name = 'ea3' THEN value END)\n FROM extended_values\n GROUP BY userkey;\n
\n
soup wrap:
Using a GROUP BY and CASE will do the trick:
CREATE TABLE extended_values (
name VARCHAR(20),
value VARCHAR(20),
userkey INT
);
INSERT INTO extended_values VALUES ('cs1', 'tgb', 100);
INSERT INTO extended_values VALUES ('cs2', 'hhy', 100);
INSERT INTO extended_values VALUES ('cs3', 'ttr', 100);
INSERT INTO extended_values VALUES ('cs1', 'hht', 104);
INSERT INTO extended_values VALUES ('cs2', 'iyu', 104);
INSERT INTO extended_values VALUES ('cs3', 'uyt', 104);
INSERT INTO extended_values VALUES ('cs1', 'tjg', 106);
INSERT INTO extended_values VALUES ('cs2', 'yyt', 106);
INSERT INTO extended_values VALUES ('cs3', 'try', 106);
COMMIT;
CREATE TABLE user_custom_property (
userkey INT,
cs1 VARCHAR(20),
cs2 VARCHAR(20),
cs3 VARCHAR(20)
);
INSERT INTO user_custom_property
SELECT
userkey,
MIN(CASE WHEN name = 'cs1' THEN value END),
MIN(CASE WHEN name = 'cs2' THEN value END),
MIN(CASE WHEN name = 'cs3' THEN value END)
FROM extended_values
GROUP BY userkey;
SELECT * FROM user_custom_property;
Output:
USERKEY CS1 CS2 CS3
---------- -------------------- -------------------- --------------------
100 tgb hhy ttr
104 hht iyu uyt
106 tjg yyt try
Check at SQLFiddle:
Edit
Regarding the question in the comment - you just have to change the values in the CASE:
INSERT INTO user_custom_property
SELECT
userkey,
MIN(CASE WHEN name = 'ea1' THEN value END),
MIN(CASE WHEN name = 'ea2' THEN value END),
MIN(CASE WHEN name = 'ea3' THEN value END)
FROM extended_values
GROUP BY userkey;
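The conditional-aggregation pivot is standard SQL and runs unchanged in SQLite; here is a runnable sketch with Python's built-in sqlite3 module, using a subset of the fixture rows above:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript('''
CREATE TABLE extended_values (name TEXT, value TEXT, userkey INT);
INSERT INTO extended_values VALUES
  ('cs1','tgb',100),('cs2','hhy',100),('cs3','ttr',100),
  ('cs1','hht',104),('cs2','iyu',104),('cs3','uyt',104);
''')
rows = conn.execute('''
    SELECT userkey,
           MIN(CASE WHEN name = 'cs1' THEN value END) AS cs1,
           MIN(CASE WHEN name = 'cs2' THEN value END) AS cs2,
           MIN(CASE WHEN name = 'cs3' THEN value END) AS cs3
      FROM extended_values
     GROUP BY userkey
     ORDER BY userkey''').fetchall()
print(rows)  # [(100, 'tgb', 'hhy', 'ttr'), (104, 'hht', 'iyu', 'uyt')]
```

Each CASE yields the value only for its own name and NULL otherwise, and MIN ignores the NULLs, so each userkey collapses to one row with cs1/cs2/cs3 as columns.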
qid & accept id:
(20028832, 20031650)
query:
longest winning streak by query
soup:
Here's one way, but I've got a feeling you're not going to like it...
\nConsider the following data DDL's...
\nCREATE TABLE results\n(id INT NOT NULL AUTO_INCREMENT PRIMARY KEY\n,homeTeam INT NOT NULL\n,awayTeam INT NOT NULL\n,homeScore INT NOT NULL\n,awayScore INT NOT NULL\n);\n\nINSERT INTO results VALUES\n(1,1,2,3,2),\n(2,3,4,0,1),\n(3,2,1,2,0),\n(4,4,3,1,0),\n(5,3,2,1,2),\n(6,2,3,0,2),\n(7,1,4,4,1),\n(8,4,1,1,2),\n(9,1,3,3,0),\n(10,3,1,1,0),\n(11,4,2,1,0),\n(12,2,4,1,2);\n
\nFrom here, we can obtain an intermediate result as follows...
\nSELECT x.*, COUNT(*) rank\n FROM\n ( SELECT id,hometeam team, CASE WHEN homescore > awayscore THEN 'w' ELSE 'l' END result FROM results \n UNION\n SELECT id,awayteam, CASE WHEN awayscore > homescore THEN 'w' ELSE 'l' END result FROM results\n ) x\n JOIN \n ( SELECT id,hometeam team, CASE WHEN homescore > awayscore THEN 'w' ELSE 'l' END result FROM results \n UNION\n SELECT id,awayteam, CASE WHEN awayscore > homescore THEN 'w' ELSE 'l' END result FROM results\n ) y\n ON y.team = x.team\n AND y.id <= x.id\n GROUP\n BY x.id\n , x.team\n ORDER\n BY team, rank;\n\n+----+------+--------+------+\n| id | team | result | rank |\n+----+------+--------+------+\n| 1 | 1 | w | 1 |\n| 3 | 1 | l | 2 |\n| 7 | 1 | w | 3 |\n| 8 | 1 | w | 4 |\n| 9 | 1 | w | 5 |\n| 10 | 1 | l | 6 |\n| 1 | 2 | l | 1 |\n| 3 | 2 | w | 2 |\n| 5 | 2 | w | 3 |\n| 6 | 2 | l | 4 |\n| 11 | 2 | l | 5 |\n| 12 | 2 | l | 6 |\n| 2 | 3 | l | 1 |\n| 4 | 3 | l | 2 |\n| 5 | 3 | l | 3 |\n| 6 | 3 | w | 4 |\n| 9 | 3 | l | 5 |\n| 10 | 3 | w | 6 |\n| 2 | 4 | w | 1 |\n| 4 | 4 | w | 2 |\n| 7 | 4 | l | 3 |\n| 8 | 4 | l | 4 |\n| 11 | 4 | w | 5 |\n| 12 | 4 | w | 6 |\n+----+------+--------+------+\n
\nBy inspection, we can see that team 1 has the longest winning streak (3 consecutive 'w's). You can set up a couple of @vars to track this or, if you're slightly masochistic (like me) you can do something slower, longer, and more complicated...
\nSELECT a.team\n , MIN(c.rank) - a.rank + 1 streak\n FROM (SELECT x.*, COUNT(*) rank\n FROM\n ( SELECT id,hometeam team, CASE WHEN homescore > awayscore THEN 'w' ELSE 'l' END result FROM results \n UNION\n SELECT id,awayteam, CASE WHEN awayscore > homescore THEN 'w' ELSE 'l' END result FROM results\n ) x\n JOIN \n ( SELECT id,hometeam team, CASE WHEN homescore > awayscore THEN 'w' ELSE 'l' END result FROM results \n UNION\n SELECT id,awayteam, CASE WHEN awayscore > homescore THEN 'w' ELSE 'l' END result FROM results\n ) y\n ON y.team = x.team\n AND y.id <= x.id\n GROUP\n BY x.id\n , x.team\n ) a\n LEFT \n JOIN (SELECT x.*, COUNT(*) rank\n FROM\n ( SELECT id,hometeam team, CASE WHEN homescore > awayscore THEN 'w' ELSE 'l' END result FROM results \n UNION\n SELECT id,awayteam, CASE WHEN awayscore > homescore THEN 'w' ELSE 'l' END result FROM results\n ) x\n JOIN \n ( SELECT id,hometeam team, CASE WHEN homescore > awayscore THEN 'w' ELSE 'l' END result FROM results \n UNION\n SELECT id,awayteam, CASE WHEN awayscore > homescore THEN 'w' ELSE 'l' END result FROM results\n ) y\n ON y.team = x.team\n AND y.id <= x.id\n GROUP\n BY x.id\n , x.team\n ) b \n ON b.team = a.team\n AND b.rank = a.rank - 1 \n AND b.result = a.result\n LEFT \n JOIN (SELECT x.*, COUNT(*) rank\n FROM\n ( SELECT id,hometeam team, CASE WHEN homescore > awayscore THEN 'w' ELSE 'l' END result FROM results \n UNION\n SELECT id,awayteam, CASE WHEN awayscore > homescore THEN 'w' ELSE 'l' END result FROM results\n ) x\n JOIN \n ( SELECT id,hometeam team, CASE WHEN homescore > awayscore THEN 'w' ELSE 'l' END result FROM results \n UNION\n SELECT id,awayteam, CASE WHEN awayscore > homescore THEN 'w' ELSE 'l' END result FROM results\n ) y\n ON y.team = x.team\n AND y.id <= x.id\n GROUP\n BY x.id\n , x.team\n ) c \n ON c.team = a.team\n AND c.rank >= a.rank \n AND c.result = a.result\n LEFT \n JOIN (SELECT x.*, COUNT(*) rank\n FROM\n ( SELECT id,hometeam team, CASE WHEN homescore > awayscore THEN 'w' ELSE 'l' 
END result FROM results \n UNION\n SELECT id,awayteam, CASE WHEN awayscore > homescore THEN 'w' ELSE 'l' END result FROM results\n ) x\n JOIN \n ( SELECT id,hometeam team, CASE WHEN homescore > awayscore THEN 'w' ELSE 'l' END result FROM results \n UNION\n SELECT id,awayteam, CASE WHEN awayscore > homescore THEN 'w' ELSE 'l' END result FROM results\n ) y\n ON y.team = x.team\n AND y.id <= x.id\n GROUP\n BY x.id\n , x.team\n ) d \n ON d.team = a.team\n AND d.rank = c.rank + 1 \n AND d.result = a.result\n WHERE a.result = 'w'\n AND b.id IS NULL\n AND c.id IS NOT NULL\n AND d.id IS NULL\n GROUP \n BY a.team\n , a.rank\n ORDER \n BY streak DESC \n LIMIT 1; \n\n +------+--------+\n | team | streak |\n +------+--------+\n | 1 | 3 |\n +------+--------+\n
\nNote that this doesn't account for individual match ties (a modest change to the repeated subquery), nor if two teams have longest winning streaks of equal length (requiring a JOIN of everything here back on itself!).
\n
soup wrap:
Here's one way, but I've got a feeling you're not going to like it...
Consider the following DDL and sample data...
CREATE TABLE results
(id INT NOT NULL AUTO_INCREMENT PRIMARY KEY
,homeTeam INT NOT NULL
,awayTeam INT NOT NULL
,homeScore INT NOT NULL
,awayScore INT NOT NULL
);
INSERT INTO results VALUES
(1,1,2,3,2),
(2,3,4,0,1),
(3,2,1,2,0),
(4,4,3,1,0),
(5,3,2,1,2),
(6,2,3,0,2),
(7,1,4,4,1),
(8,4,1,1,2),
(9,1,3,3,0),
(10,3,1,1,0),
(11,4,2,1,0),
(12,2,4,1,2);
From here, we can obtain an intermediate result as follows...
SELECT x.*, COUNT(*) rank
FROM
( SELECT id,hometeam team, CASE WHEN homescore > awayscore THEN 'w' ELSE 'l' END result FROM results
UNION
SELECT id,awayteam, CASE WHEN awayscore > homescore THEN 'w' ELSE 'l' END result FROM results
) x
JOIN
( SELECT id,hometeam team, CASE WHEN homescore > awayscore THEN 'w' ELSE 'l' END result FROM results
UNION
SELECT id,awayteam, CASE WHEN awayscore > homescore THEN 'w' ELSE 'l' END result FROM results
) y
ON y.team = x.team
AND y.id <= x.id
GROUP
BY x.id
, x.team
ORDER
BY team, rank;
+----+------+--------+------+
| id | team | result | rank |
+----+------+--------+------+
| 1 | 1 | w | 1 |
| 3 | 1 | l | 2 |
| 7 | 1 | w | 3 |
| 8 | 1 | w | 4 |
| 9 | 1 | w | 5 |
| 10 | 1 | l | 6 |
| 1 | 2 | l | 1 |
| 3 | 2 | w | 2 |
| 5 | 2 | w | 3 |
| 6 | 2 | l | 4 |
| 11 | 2 | l | 5 |
| 12 | 2 | l | 6 |
| 2 | 3 | l | 1 |
| 4 | 3 | l | 2 |
| 5 | 3 | l | 3 |
| 6 | 3 | w | 4 |
| 9 | 3 | l | 5 |
| 10 | 3 | w | 6 |
| 2 | 4 | w | 1 |
| 4 | 4 | w | 2 |
| 7 | 4 | l | 3 |
| 8 | 4 | l | 4 |
| 11 | 4 | w | 5 |
| 12 | 4 | w | 6 |
+----+------+--------+------+
By inspection, we can see that team 1 has the longest winning streak (3 consecutive 'w's). You can set up a couple of @vars to track this or, if you're slightly masochistic (like me), you can do something slower, longer, and more complicated...
SELECT a.team
, MIN(c.rank) - a.rank + 1 streak
FROM (SELECT x.*, COUNT(*) rank
FROM
( SELECT id,hometeam team, CASE WHEN homescore > awayscore THEN 'w' ELSE 'l' END result FROM results
UNION
SELECT id,awayteam, CASE WHEN awayscore > homescore THEN 'w' ELSE 'l' END result FROM results
) x
JOIN
( SELECT id,hometeam team, CASE WHEN homescore > awayscore THEN 'w' ELSE 'l' END result FROM results
UNION
SELECT id,awayteam, CASE WHEN awayscore > homescore THEN 'w' ELSE 'l' END result FROM results
) y
ON y.team = x.team
AND y.id <= x.id
GROUP
BY x.id
, x.team
) a
LEFT
JOIN (SELECT x.*, COUNT(*) rank
FROM
( SELECT id,hometeam team, CASE WHEN homescore > awayscore THEN 'w' ELSE 'l' END result FROM results
UNION
SELECT id,awayteam, CASE WHEN awayscore > homescore THEN 'w' ELSE 'l' END result FROM results
) x
JOIN
( SELECT id,hometeam team, CASE WHEN homescore > awayscore THEN 'w' ELSE 'l' END result FROM results
UNION
SELECT id,awayteam, CASE WHEN awayscore > homescore THEN 'w' ELSE 'l' END result FROM results
) y
ON y.team = x.team
AND y.id <= x.id
GROUP
BY x.id
, x.team
) b
ON b.team = a.team
AND b.rank = a.rank - 1
AND b.result = a.result
LEFT
JOIN (SELECT x.*, COUNT(*) rank
FROM
( SELECT id,hometeam team, CASE WHEN homescore > awayscore THEN 'w' ELSE 'l' END result FROM results
UNION
SELECT id,awayteam, CASE WHEN awayscore > homescore THEN 'w' ELSE 'l' END result FROM results
) x
JOIN
( SELECT id,hometeam team, CASE WHEN homescore > awayscore THEN 'w' ELSE 'l' END result FROM results
UNION
SELECT id,awayteam, CASE WHEN awayscore > homescore THEN 'w' ELSE 'l' END result FROM results
) y
ON y.team = x.team
AND y.id <= x.id
GROUP
BY x.id
, x.team
) c
ON c.team = a.team
AND c.rank >= a.rank
AND c.result = a.result
LEFT
JOIN (SELECT x.*, COUNT(*) rank
FROM
( SELECT id,hometeam team, CASE WHEN homescore > awayscore THEN 'w' ELSE 'l' END result FROM results
UNION
SELECT id,awayteam, CASE WHEN awayscore > homescore THEN 'w' ELSE 'l' END result FROM results
) x
JOIN
( SELECT id,hometeam team, CASE WHEN homescore > awayscore THEN 'w' ELSE 'l' END result FROM results
UNION
SELECT id,awayteam, CASE WHEN awayscore > homescore THEN 'w' ELSE 'l' END result FROM results
) y
ON y.team = x.team
AND y.id <= x.id
GROUP
BY x.id
, x.team
) d
ON d.team = a.team
AND d.rank = c.rank + 1
AND d.result = a.result
WHERE a.result = 'w'
AND b.id IS NULL
AND c.id IS NOT NULL
AND d.id IS NULL
GROUP
BY a.team
, a.rank
ORDER
BY streak DESC
LIMIT 1;
+------+--------+
| team | streak |
+------+--------+
| 1 | 3 |
+------+--------+
Note that this doesn't account for individual match ties (a modest change to the repeated subquery), nor if two teams have longest winning streaks of equal length (requiring a JOIN of everything here back on itself!).
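As a cross-check of the quadruple self-join, the streak logic is a one-pass scan in a procedural language; here is a Python sketch over the same fixture rows (this is a verification aid, not an alternative the answer proposes):

```python
# Each row is (id, homeTeam, awayTeam, homeScore, awayScore), in id order.
results = [
    (1,1,2,3,2),(2,3,4,0,1),(3,2,1,2,0),(4,4,3,1,0),(5,3,2,1,2),(6,2,3,0,2),
    (7,1,4,4,1),(8,4,1,1,2),(9,1,3,3,0),(10,3,1,1,0),(11,4,2,1,0),(12,2,4,1,2),
]

current, best = {}, {}
for _id, home, away, hs, aws in results:
    # Like the SQL's CASE, anything other than a win counts as a loss
    # (the fixture contains no drawn matches).
    winner, loser = (home, away) if hs > aws else (away, home)
    current[winner] = current.get(winner, 0) + 1
    current[loser] = 0
    best[winner] = max(best.get(winner, 0), current[winner])

team, streak = max(best.items(), key=lambda kv: kv[1])
print(team, streak)  # 1 3
```

This reproduces the SQL's result of team 1 with a streak of 3, and inherits the same caveats about ties.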
qid & accept id:
(20049984, 20050682)
query:
Calculate time difference between rows
soup:
This is a CTE solution but, as has been indicated, this may not always perform well - because we're having to compute functions against the DateTime column, most indexes will be useless:
\ndeclare @t table (ID int not null,[DateTime] datetime not null,\n PID int not null,TIU int not null)\ninsert into @t(ID,[DateTime],PID,TIU) values\n(1,'2013-11-18 00:15:00',1551,1005 ),\n(2,'2013-11-18 00:16:03',1551,1885 ),\n(3,'2013-11-18 00:16:30',9110,75527 ),\n(4,'2013-11-18 00:22:01',1022,75 ),\n(5,'2013-11-18 00:22:09',1019,1311 ),\n(6,'2013-11-18 00:23:52',1022,89 ),\n(7,'2013-11-18 00:24:19',1300,44433 ),\n(8,'2013-11-18 00:38:57',9445,2010 )\n\n;With Islands as (\n select ID as MinID,[DateTime],ID as RecID from @t t1\n where not exists\n (select * from @t t2\n where t2.ID < t1.ID and --Or by date, if needed\n --Use 300 seconds to avoid most transition issues\n DATEDIFF(second,t2.[DateTime],t1.[DateTime]) < 300\n )\n union all\n select i.MinID,t2.[DateTime],t2.ID\n from Islands i\n inner join\n @t t2\n on\n i.RecID < t2.ID and\n DATEDIFF(second,i.[DateTime],t2.[DateTime]) < 300\n), Ends as (\n select MinID,MAX(RecID) as MaxID from Islands group by MinID\n)\nselect * from @t t\nwhere exists(select * from Ends e where e.MinID = t.ID or e.MaxID = t.ID)\n
\nThis also returns a row for ID 1, since that row has no preceding row within 5 minutes of it - but that should be easy enough to exclude in the final select, if needed.
\nI've assumed we can use ID as a proxy for increasing dates - that if for two rows, the ID is higher in the second row, then the DateTime will also be later.
\n
\nIslands is a recursive CTE. The top half (the anchor) just selects rows which do not have any preceding row within 5 minutes of themselves. We select the ID twice for those rows and also keep the DateTime around.
\nIn the recursive portion, we try to find a new row from the table that can be "added on" to an existing Islands row - based on this new row being no more than 5 minutes later than the current end-point of the island.
\nOnce the recursion is complete, we then exclude the intermediate rows that the CTE produces. E.g. for the "4" island, it generated the following rows:
\n4,00:22:01,4\n4,00:22:09,5\n4,00:23:52,6\n4,00:24:19,7\n
\nAnd all that we care about is that final row where we've identified an "island" of time from ID 4 to ID 7 - that's what the second CTE (Ends) is finding for us.
\n
soup wrap:
This is a CTE solution but, as has been indicated, this may not always perform well - because we're having to compute functions against the DateTime column, most indexes will be useless:
declare @t table (ID int not null,[DateTime] datetime not null,
PID int not null,TIU int not null)
insert into @t(ID,[DateTime],PID,TIU) values
(1,'2013-11-18 00:15:00',1551,1005 ),
(2,'2013-11-18 00:16:03',1551,1885 ),
(3,'2013-11-18 00:16:30',9110,75527 ),
(4,'2013-11-18 00:22:01',1022,75 ),
(5,'2013-11-18 00:22:09',1019,1311 ),
(6,'2013-11-18 00:23:52',1022,89 ),
(7,'2013-11-18 00:24:19',1300,44433 ),
(8,'2013-11-18 00:38:57',9445,2010 )
;With Islands as (
select ID as MinID,[DateTime],ID as RecID from @t t1
where not exists
(select * from @t t2
where t2.ID < t1.ID and --Or by date, if needed
--Use 300 seconds to avoid most transition issues
DATEDIFF(second,t2.[DateTime],t1.[DateTime]) < 300
)
union all
select i.MinID,t2.[DateTime],t2.ID
from Islands i
inner join
@t t2
on
i.RecID < t2.ID and
DATEDIFF(second,i.[DateTime],t2.[DateTime]) < 300
), Ends as (
select MinID,MAX(RecID) as MaxID from Islands group by MinID
)
select * from @t t
where exists(select * from Ends e where e.MinID = t.ID or e.MaxID = t.ID)
This also returns a row for ID 1, since that row has no preceding row within 5 minutes of it - but that should be easy enough to exclude in the final select, if needed.
I've assumed we can use ID as a proxy for increasing dates - that if for two rows, the ID is higher in the second row, then the DateTime will also be later.
Islands is a recursive CTE. The top half (the anchor) just selects rows which do not have any preceding row within 5 minutes of themselves. We select the ID twice for those rows and also keep the DateTime around.
In the recursive portion, we try to find a new row from the table that can be "added on" to an existing Islands row - based on this new row being no more than 5 minutes later than the current end-point of the island.
Once the recursion is complete, we then exclude the intermediate rows that the CTE produces. E.g. for the "4" island, it generated the following rows:
4,00:22:01,4
4,00:22:09,5
4,00:23:52,6
4,00:24:19,7
And all that we care about is that final row where we've identified an "island" of time from ID 4 to ID 7 - that's what the second CTE (Ends) is finding for us.
qid & accept id:
(20062208, 20062265)
query:
Simple SQL Table Insert
soup:
If you have an existing table you can do:
\nINSERT INTO ExistingTable (Columns,..)\nSELECT Columns,...\nFROM OtherTable\n
\nFrom your sql
\ninsert into newEmpTable (employee_id, first_name, \n last_name, email, phone_number, hire_date, \n job_id, salary, commission_pct, manager_id, department_id)\nselect e.employee_id, e.first_name, e.last_name, e.email, e.phone_number, e.hire_date, e.job_id, e.salary, e.commission_pct, e.manager_id, e.department_id\nfrom employees e\njoin departments d\non e.department_id = d.department_id\njoin jobs j\non e.job_id = j.job_id\njoin locations l\non d.location_id = l.location_id\nwhere l.city = 'Seattle';\n
\nSee http://docs.oracle.com/cd/E17952_01/refman-5.1-en/insert-select.html
\nIf you do not have a table and want to create it,
\ncreate table new_table as \nselect e.employee_id, e.first_name, e.last_name, e.email, e.phone_number, e.hire_date, e.job_id, e.salary, e.commission_pct, e.manager_id, e.department_id\nfrom employees e\njoin departments d\non e.department_id = d.department_id\njoin jobs j\non e.job_id = j.job_id\njoin locations l\non d.location_id = l.location_id\nwhere l.city = 'Seattle';\n
\n
soup wrap:
If you have an existing table, you can do:
INSERT INTO ExistingTable (Columns,..)
SELECT Columns,...
FROM OtherTable
From your SQL:
insert into newEmpTable (employee_id, first_name,
last_name, email, phone_number, hire_date,
job_id, salary, commission_pct, manager_id, department_id)
select e.employee_id, e.first_name, e.last_name, e.email, e.phone_number, e.hire_date, e.job_id, e.salary, e.commission_pct, e.manager_id, e.department_id
from employees e
join departments d
on e.department_id = d.department_id
join jobs j
on e.job_id = j.job_id
join locations l
on d.location_id = l.location_id
where l.city = 'Seattle';
See http://docs.oracle.com/cd/E17952_01/refman-5.1-en/insert-select.html
If you do not have a table and want to create it,
create table new_table as
select e.employee_id, e.first_name, e.last_name, e.email, e.phone_number, e.hire_date, e.job_id, e.salary, e.commission_pct, e.manager_id, e.department_id
from employees e
join departments d
on e.department_id = d.department_id
join jobs j
on e.job_id = j.job_id
join locations l
on d.location_id = l.location_id
where l.city = 'Seattle';
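Both forms are easy to try out; here is a small sanity check using Python's sqlite3 with a made-up two-column table (not the HR schema above - table and column names are illustrative only):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE employees (id INTEGER, name TEXT, city TEXT);
    INSERT INTO employees VALUES (1, 'Ann', 'Seattle'), (2, 'Bob', 'Denver');
    CREATE TABLE new_emp (id INTEGER, name TEXT);
""")
# Insert into an existing table from a query
con.execute("""
    INSERT INTO new_emp (id, name)
    SELECT id, name FROM employees WHERE city = 'Seattle'
""")
# Or create the target table directly from the query (CREATE TABLE ... AS SELECT)
con.execute(
    "CREATE TABLE new_table AS "
    "SELECT id, name FROM employees WHERE city = 'Seattle'")
copied = con.execute("SELECT id, name FROM new_emp").fetchall()
created = con.execute("SELECT COUNT(*) FROM new_table").fetchone()[0]
```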
qid & accept id:
(20072250, 20074369)
query:
Find sum across varying no of columns
soup:
Intro
\nThe normal way to resolve this question is: chose correct structure. If you have 24 fields and you need to loop dynamically in SQL, then something went wrong. Also, it is bad that your table has not any primary key (or you've not mentioned that).
\nExtremely important note
\nIt is no matter that the way I'll describe will work. It is still bad practice because of using some special things in MySQL. You can use it on your own risk - and, again, reconsider your structure if it's possible.
\nThe hack
\nActually, you can do some tricks using MySQL INFORMATION_SCHEMA tables. With this you can create "text" SQL, which later can be used in prepared statement.
\nMy table
\nIt's called test. Here it is:
\n\n+----------+---------+------+-----+---------+-------+\n| Field | Type | Null | Key | Default | Extra |\n+----------+---------+------+-----+---------+-------+\n| value1 | int(11) | YES | | NULL | |\n| value2 | int(11) | YES | | NULL | |\n| value3 | int(11) | YES | | NULL | |\n| value4 | int(11) | YES | | NULL | |\n| constant | int(11) | YES | | NULL | |\n+----------+---------+------+-----+---------+-------+\n
\n-I have 4 "value" fields in it and no primary key column (that causes troubles, but I've resolved that). Now, my data:
\n\n+--------+--------+--------+--------+----------+\n| value1 | value2 | value3 | value4 | constant |\n+--------+--------+--------+--------+----------+\n| 2 | 5 | 6 | 0 | 2 |\n| 1 | -100 | 0 | 0 | 1 |\n| 3 | 10 | -10 | 0 | 3 |\n| 4 | 0 | -1 | 5 | 3 |\n| -1 | 1 | -1 | 1 | 4 |\n+--------+--------+--------+--------+----------+\n
\nThe trick
\nIt's about selecting data from mentioned service schema in MySQL and working with GROUP_CONCAT function:
\nselect \n concat('SELECT CASE(seq) ', \n group_concat(groupcase separator ''), \n ' END AS result FROM (select *, @j:=@j+1 as seq from test cross join (select @j:=0) as initj) as inittest') \nfrom \n (select \n concat(' WHEN ', rownum, ' THEN ', groupvalue) as groupcase \n from \n (select \n rownum, \n group_concat(COLUMN_NAME SEPARATOR '+') as groupvalue \n from \n (select \n *, \n @row:=@row+1 as rownum \n from test \n cross join (select @row:=0) as initrow) as tablestruct \n left join \n (select \n COLUMN_NAME, \n @num:=@num+1 as num \n from \n INFORMATION_SCHEMA.COLUMNS cross join (select @num:=0) as init \n where \n TABLE_SCHEMA='test' && \n TABLE_NAME='test' && \n COLUMN_NAME!='constant') as struct \n on tablestruct.constant>=struct.num \n group by \n rownum) as groupvalues) as groupscase\n
\n-what will this do? Actually, I recommend to execute it step-by-step (i.e. add more complex layer to that which you've already understood) - I doubt there's short way to describe what's happening. It's not a wizardry, it's about constructing valid text SQL from input conditions. End result will be like:
\nSELECT CASE(seq) WHEN 1 THEN value1+value2 WHEN 2 THEN value1 WHEN 3 THEN value3+value2+value1 WHEN 4 THEN value3+value2+value1 WHEN 5 THEN value2+value1+value4+value3 END AS result FROM (select *, @j:=@j+1 as seq from test cross join (select @j:=0) as initj) as inittest\n
\n(I didn't add formatting because that SQL is generated string, not the one you'll write by yourself).
\nLast step
\nWhat now? Just Allocate it with:
\n\nmysql> set @s=(select concat('SELECT CASE(seq) ', group_concat(groupcase separator ''), ' END AS result FROM (select *, @j:=@j+1 as seq from test cross join (select @j:=0) as initj) as inittest') from (select concat(' WHEN ', rownum, ' THEN ', groupvalue) as groupcase from (select rownum, group_concat(COLUMN_NAME SEPARATOR '+') as groupvalue from (select *, @row:=@row+1 as rownum from test cross join (select @row:=0) as initrow) as tablestruct left join (select COLUMN_NAME, @num:=@num+1 as num from INFORMATION_SCHEMA.COLUMNS cross join (select @num:=0) as init where TABLE_SCHEMA='test' && TABLE_NAME='test' and COLUMN_NAME!='constant') as struct on tablestruct.constant>=struct.num group by rownum) as groupvalues) as groupscase);\nQuery OK, 0 rows affected (0.00 sec)\n\nmysql> prepare stmt from @s;\nQuery OK, 0 rows affected (0.00 sec)\nStatement prepared\n\n-and, finally:
\n\nmysql> execute stmt;\n
\nYou'll get results as:
\n\n+--------+\n| result |\n+--------+\n| 7 |\n| 1 |\n| 3 |\n| 3 |\n| 0 |\n+--------+\n
\nWhy is this bad
\nBecause it generates string for whole table. I.e. for each row! Imagine if you'll have 1000 rows - that will be nasty. MySQL also has limitation in GROUP_CONCAT: group_concat_max_len - which will limit this way, obviously.
\nSo why I did that?
\nBecause I was curious if the way without additional DDL and implicit recounting of table's fields exist. I've found it, so leaving it here.
\n
soup wrap:
Intro
The normal way to resolve this question is: choose the correct structure. If you have 24 fields and need to loop over them dynamically in SQL, then something has gone wrong. It is also a problem that your table doesn't have a primary key (or you've not mentioned one).
Extremely important note
Even though the approach I'll describe works, it is still bad practice because it relies on MySQL-specific features. Use it at your own risk - and, again, reconsider your structure if possible.
The hack
Actually, you can do some tricks using the MySQL INFORMATION_SCHEMA tables. With these you can build the SQL as text, which can later be used in a prepared statement.
My table
It's called test. Here it is:
+----------+---------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+----------+---------+------+-----+---------+-------+
| value1 | int(11) | YES | | NULL | |
| value2 | int(11) | YES | | NULL | |
| value3 | int(11) | YES | | NULL | |
| value4 | int(11) | YES | | NULL | |
| constant | int(11) | YES | | NULL | |
+----------+---------+------+-----+---------+-------+
I have 4 "value" fields in it and no primary key column (that causes trouble, but I've worked around it). Now, my data:
+--------+--------+--------+--------+----------+
| value1 | value2 | value3 | value4 | constant |
+--------+--------+--------+--------+----------+
| 2 | 5 | 6 | 0 | 2 |
| 1 | -100 | 0 | 0 | 1 |
| 3 | 10 | -10 | 0 | 3 |
| 4 | 0 | -1 | 5 | 3 |
| -1 | 1 | -1 | 1 | 4 |
+--------+--------+--------+--------+----------+
The trick
It's about selecting data from mentioned service schema in MySQL and working with GROUP_CONCAT function:
select
concat('SELECT CASE(seq) ',
group_concat(groupcase separator ''),
' END AS result FROM (select *, @j:=@j+1 as seq from test cross join (select @j:=0) as initj) as inittest')
from
(select
concat(' WHEN ', rownum, ' THEN ', groupvalue) as groupcase
from
(select
rownum,
group_concat(COLUMN_NAME SEPARATOR '+') as groupvalue
from
(select
*,
@row:=@row+1 as rownum
from test
cross join (select @row:=0) as initrow) as tablestruct
left join
(select
COLUMN_NAME,
@num:=@num+1 as num
from
INFORMATION_SCHEMA.COLUMNS cross join (select @num:=0) as init
where
TABLE_SCHEMA='test' &&
TABLE_NAME='test' &&
COLUMN_NAME!='constant') as struct
on tablestruct.constant>=struct.num
group by
rownum) as groupvalues) as groupscase
What will this do? I recommend executing it step by step (i.e. adding each more complex layer onto the part you've already understood) - there's no short way to describe what's happening. It's not wizardry; it's about constructing valid SQL text from the input conditions. The end result will look like:
SELECT CASE(seq) WHEN 1 THEN value1+value2 WHEN 2 THEN value1 WHEN 3 THEN value3+value2+value1 WHEN 4 THEN value3+value2+value1 WHEN 5 THEN value2+value1+value4+value3 END AS result FROM (select *, @j:=@j+1 as seq from test cross join (select @j:=0) as initj) as inittest
(I didn't add formatting because that SQL is a generated string, not one you'd write yourself.)
Last step
What now? Just assign it to a variable and prepare it:
mysql> set @s=(select concat('SELECT CASE(seq) ', group_concat(groupcase separator ''), ' END AS result FROM (select *, @j:=@j+1 as seq from test cross join (select @j:=0) as initj) as inittest') from (select concat(' WHEN ', rownum, ' THEN ', groupvalue) as groupcase from (select rownum, group_concat(COLUMN_NAME SEPARATOR '+') as groupvalue from (select *, @row:=@row+1 as rownum from test cross join (select @row:=0) as initrow) as tablestruct left join (select COLUMN_NAME, @num:=@num+1 as num from INFORMATION_SCHEMA.COLUMNS cross join (select @num:=0) as init where TABLE_SCHEMA='test' && TABLE_NAME='test' and COLUMN_NAME!='constant') as struct on tablestruct.constant>=struct.num group by rownum) as groupvalues) as groupscase);
Query OK, 0 rows affected (0.00 sec)
mysql> prepare stmt from @s;
Query OK, 0 rows affected (0.00 sec)
Statement prepared
and, finally:
mysql> execute stmt;
You'll get results as:
+--------+
| result |
+--------+
| 7 |
| 1 |
| 3 |
| 3 |
| 0 |
+--------+
Why is this bad
Because it generates the string for the whole table - i.e. for each row! Imagine having 1000 rows - that would be nasty. MySQL also limits GROUP_CONCAT via group_concat_max_len, which obviously caps this approach.
So why I did that?
Because I was curious whether a way exists without additional DDL or explicitly recounting the table's fields. I found one, so I'm leaving it here.
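For contrast, the variable-width row sum that all this generated SQL computes is a one-liner in application code; a Python sketch over the same sample data:

```python
# Sample rows mirroring the test table: (value1, value2, value3, value4, constant)
rows = [
    (2, 5, 6, 0, 2),
    (1, -100, 0, 0, 1),
    (3, 10, -10, 0, 3),
    (4, 0, -1, 5, 3),
    (-1, 1, -1, 1, 4),
]

def row_sums(rows):
    """Sum the first `constant` value columns of each row,
    matching what the generated CASE expression does per row."""
    return [sum(r[:r[-1]]) for r in rows]
```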
qid & accept id:
(20096624, 20096727)
query:
sum with sql and direct condition
soup:
you cannot use derived column in where clause, there're many discussions on SO about this. One way to do this is to use subquery or CTE
\nselect val\nfrom (select 1+3 as val) as v\nwhere val > 2\n
\nor
\nwith cte as (\n select 1+3 as val\n)\nselect val\nfrom cte\nwhere val > 2\n
\n
soup wrap:
You cannot use a derived column in a WHERE clause; there are many discussions on SO about this. One way around it is to use a subquery or a CTE:
select val
from (select 1+3 as val) as v
where val > 2
or
with cte as (
select 1+3 as val
)
select val
from cte
where val > 2
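Both workarounds are portable; a quick check using Python's sqlite3 (SQLite supports derived tables and CTEs too):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Derived-table workaround: filter on the alias from the subquery
sub = con.execute(
    "SELECT val FROM (SELECT 1 + 3 AS val) AS v WHERE val > 2"
).fetchone()[0]
# CTE workaround (note the AS keyword required before the CTE body)
cte = con.execute(
    "WITH cte AS (SELECT 1 + 3 AS val) SELECT val FROM cte WHERE val > 2"
).fetchone()[0]
```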
qid & accept id:
(20146719, 20146869)
query:
Copy row from table 1 to table 2
soup:
The layout of the two tables are the same you just do:
\nINSERT INTO table2\nSELECT * FROM table1;\n
\nOr we can copy only the columns we want to into another, existing table:
\nINSERT INTO table2\n(column_name(s))\nSELECT column_name(s)\nFROM table1;\n
\n
soup wrap:
If the layout of the two tables is the same, you can just do:
INSERT INTO table2
SELECT * FROM table1;
Or we can copy only the columns we want into another existing table:
INSERT INTO table2
(column_name(s))
SELECT column_name(s)
FROM table1;
qid & accept id:
(20147303, 20147780)
query:
Sybase STR-function in Oracle
soup:
select to_char(123.56, '99999999999999999999.00000000000')\nfrom dual;\n
\nor, more generically (substitute 30 and 10 respectively as required):
\nselect to_char(123.56, lpad(rpad('.',10,'0'),30,'9'))\nfrom dual;\n
\nNote: the string length will be 31 to allow room for the possible "-" (negative) sign.
\n
soup wrap:
select to_char(123.56, '99999999999999999999.00000000000')
from dual;
or, more generically (substitute 30 and 10 respectively as required):
select to_char(123.56, lpad(rpad('.',10,'0'),30,'9'))
from dual;
Note: the string length will be 31 to allow room for the possible "-" (negative) sign.
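To see what the LPAD/RPAD expression actually builds, here is a Python sketch that constructs the same format mask (30 and 10 are the same substitutable parameters as above):

```python
def to_char_mask(total=30, decimals=10):
    """Build the Oracle number mask that lpad(rpad('.', decimals, '0'), total, '9')
    produces: a decimal point, decimals-1 zeros, left-padded with '9' to `total`."""
    fractional = "." + "0" * (decimals - 1)              # rpad('.', decimals, '0')
    return "9" * (total - len(fractional)) + fractional  # lpad(..., total, '9')
```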
qid & accept id:
(20197566, 20198642)
query:
UDF Field Reference In SQL Server UPDATE statement
soup:
To actually answer your question, it is always the values before the update that are used, so with this table:
\nA | B\n----+-----\n1 | 2\n3 | 4\n
\nRunning:
\nUPDATE T\nSET A = B,\n B = A;\n
\nWill give:
\nA | B\n----+-----\n2 | 1\n4 | 3 \n
\nIt does not run in the order of the statements within the update.
\nHowever, if it is not too late you should seriously consider redesigning your tables, storing delimited values in a text column is a terrible idea.
\nYou would be much better off storing your data in a normalised form, so you would have a table structure like:
\nPermissionCode
\nPermissionCode\n-------\nA \nB \nC\nD\nZ\n
\nUserPermission
\nUserID | PermissionCode\n--------+--------------------\n1 | A\n1 | B\n1 | C\n1 | D\n
\nYou can then use another other table to manage linked Permissions:
\nParentCode | ChildCode\n------------+---------------\n A | C\n A | G\n
\nYou can then get all permissions by a user using this table, e.g. by creating a view:
\nCREATE VIEW dbo.AllUserPermission\nAS\nSELECT p.UserID, p.PermissionCode\nFROM UserPermission p\nUNION \nSELECT p.UserID, lp.ChildCode\nFROM UserPermission p\n INNER JOIN LinkedPermission lp\n ON lp.ParentCode = p.PermissionCode;\n
\nThen you can get permissions that a user does not have using something like this:
\nSELECT u.UserID, P.PermissionCode\nFROM UserTable u\n CROSS JOIN PermissionCode p\nWHERE NOT EXISTS\n ( SELECT 1\n FROM AllUserPermission up\n WHERE up.UserID = u.UserID\n AND up.PermissionCode = p.PermissionCode\n );\n
\nThis way when you add new permissions you don't need to upate a column for all the users for DoNotPromoteCode, this is calculated on the fly by removing permissions the user has from a list of all permissions.
\nIf you specifically need to store codes that people have expcitly opted out of in addition to those they are not receiving then you could add a column to the UserPermission table to store this, you can also store dates and times so you know when various actions were taken:
\nUserID | PermissionCode | AddedDateTime | DoNotPromoteDateTime | RemovedDateTime\n--------+-------------------+-------------------+---------------------------+--------------------\n1 | A | 2013-11-25 16:55 | NULL | NULL\n1 | B | 2013-11-25 16:55 | 2013-11-25 16:55 | NULL\n1 | C | 2013-11-25 16:55 | 2013-11-25 16:56 | 2013-11-25 16:57\n1 | D | 2013-11-25 16:55 | NULL | 2013-11-25 16:57\n
\nBy querying on whether certain columns are NULL or not you can determine various states.
\nThis is a much more manageable way of dealing with a one to many relationship, pipe delimited strings will cause no end of problems, if you need to show the permission codes as a delimited string for any reason this can be achieved using SQL Servers XML extensions
\n
soup wrap:
To actually answer your question, it is always the values before the update that are used, so with this table:
A | B
----+-----
1 | 2
3 | 4
Running:
UPDATE T
SET A = B,
    B = A;
Will give:
A | B
----+-----
2 | 1
4 | 3
It does not run in the order of the statements within the update.
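SQLite also evaluates SET expressions against the pre-update row, so the swap behaviour shown in the output above can be verified from Python's sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE T (A INTEGER, B INTEGER);
    INSERT INTO T VALUES (1, 2), (3, 4);
""")
# A gets the OLD value of B, and B gets the OLD value of A:
# the second assignment does not see the new A
con.execute("UPDATE T SET A = B, B = A")
after = con.execute("SELECT A, B FROM T ORDER BY A").fetchall()
```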
However, if it is not too late you should seriously consider redesigning your tables, storing delimited values in a text column is a terrible idea.
You would be much better off storing your data in a normalised form, so you would have a table structure like:
PermissionCode
PermissionCode
-------
A
B
C
D
Z
UserPermission
UserID | PermissionCode
--------+--------------------
1 | A
1 | B
1 | C
1 | D
You can then use another table to manage linked Permissions:
ParentCode | ChildCode
------------+---------------
A | C
A | G
You can then get all permissions by a user using this table, e.g. by creating a view:
CREATE VIEW dbo.AllUserPermission
AS
SELECT p.UserID, p.PermissionCode
FROM UserPermission p
UNION
SELECT p.UserID, lp.ChildCode
FROM UserPermission p
INNER JOIN LinkedPermission lp
ON lp.ParentCode = p.PermissionCode;
Then you can get permissions that a user does not have using something like this:
SELECT u.UserID, P.PermissionCode
FROM UserTable u
CROSS JOIN PermissionCode p
WHERE NOT EXISTS
( SELECT 1
FROM AllUserPermission up
WHERE up.UserID = u.UserID
AND up.PermissionCode = p.PermissionCode
);
This way when you add new permissions you don't need to update a column for all the users for DoNotPromoteCode; it is calculated on the fly by removing the permissions the user has from the list of all permissions.
If you specifically need to store codes that people have explicitly opted out of, in addition to those they are not receiving, then you could add a column to the UserPermission table to store this; you can also store dates and times so you know when various actions were taken:
UserID | PermissionCode | AddedDateTime | DoNotPromoteDateTime | RemovedDateTime
--------+-------------------+-------------------+---------------------------+--------------------
1 | A | 2013-11-25 16:55 | NULL | NULL
1 | B | 2013-11-25 16:55 | 2013-11-25 16:55 | NULL
1 | C | 2013-11-25 16:55 | 2013-11-25 16:56 | 2013-11-25 16:57
1 | D | 2013-11-25 16:55 | NULL | 2013-11-25 16:57
By querying on whether certain columns are NULL or not you can determine various states.
This is a much more manageable way of dealing with a one-to-many relationship; pipe-delimited strings will cause no end of problems. If you need to show the permission codes as a delimited string for any reason, this can be achieved using SQL Server's XML extensions.
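The NOT EXISTS anti-join is easy to sandbox; a minimal check using Python's sqlite3 with made-up users and codes (only the columns needed for the join):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE PermissionCode (PermissionCode TEXT);
    INSERT INTO PermissionCode VALUES ('A'), ('B'), ('C');
    CREATE TABLE UserTable (UserID INTEGER);
    INSERT INTO UserTable VALUES (1);
    CREATE TABLE UserPermission (UserID INTEGER, PermissionCode TEXT);
    INSERT INTO UserPermission VALUES (1, 'A');
""")
# Permissions each user does NOT have: all codes minus the ones held
missing = con.execute("""
    SELECT u.UserID, p.PermissionCode
    FROM UserTable u CROSS JOIN PermissionCode p
    WHERE NOT EXISTS (
        SELECT 1 FROM UserPermission up
        WHERE up.UserID = u.UserID AND up.PermissionCode = p.PermissionCode
    )
    ORDER BY p.PermissionCode
""").fetchall()
```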
qid & accept id:
(20250357, 20251202)
query:
Parsing string values in Access
soup:
Put the following functions into a Module:
\n Function CountCSWords (ByVal S) As Integer\n ' Counts the words in a string that are separated by commas.\n\n Dim WC As Integer, Pos As Integer\n If VarType(S) <> 8 Or Len(S) = 0 Then\n CountCSWords = 0\n Exit Function\n End If\n WC = 1\n Pos = InStr(S, ",")\n Do While Pos > 0\n WC = WC + 1\n Pos = InStr(Pos + 1, S, ",")\n Loop\n CountCSWords = WC\n End Function\n\n Function GetCSWord (ByVal S, Indx As Integer)\n ' Returns the nth word in a specific field.\n\n Dim WC As Integer, Count As Integer, SPos As Integer, EPos As Integer\n WC = CountCSWords(S)\n If Indx < 1 Or Indx > WC Then\n GetCSWord = Null\n Exit Function\n End If\n Count = 1\n SPos = 1\n For Count = 2 To Indx\n SPos = InStr(SPos, S, ",") + 1\n Next Count\n EPos = InStr(SPos, S, ",") - 1\n If EPos <= 0 Then EPos = Len(S)\n GetCSWord = Trim(Mid(S, SPos, EPos - SPos + 1))\n End Function\n
\nThen, put a field in your query like this:
\nMyFirstField: GetCSWord([FieldForms],1)\n
\nPut another one in like this:
\nMySecondField: GetCSWord([FieldForms],2)\n
\nEtc... for as many as you need.
\n
soup wrap:
Put the following functions into a Module:
Function CountCSWords (ByVal S) As Integer
' Counts the words in a string that are separated by commas.
Dim WC As Integer, Pos As Integer
If VarType(S) <> 8 Or Len(S) = 0 Then
CountCSWords = 0
Exit Function
End If
WC = 1
Pos = InStr(S, ",")
Do While Pos > 0
WC = WC + 1
Pos = InStr(Pos + 1, S, ",")
Loop
CountCSWords = WC
End Function
Function GetCSWord (ByVal S, Indx As Integer)
' Returns the nth word in a specific field.
Dim WC As Integer, Count As Integer, SPos As Integer, EPos As Integer
WC = CountCSWords(S)
If Indx < 1 Or Indx > WC Then
GetCSWord = Null
Exit Function
End If
Count = 1
SPos = 1
For Count = 2 To Indx
SPos = InStr(SPos, S, ",") + 1
Next Count
EPos = InStr(SPos, S, ",") - 1
If EPos <= 0 Then EPos = Len(S)
GetCSWord = Trim(Mid(S, SPos, EPos - SPos + 1))
End Function
Then, put a field in your query like this:
MyFirstField: GetCSWord([FieldForms],1)
Put another one in like this:
MySecondField: GetCSWord([FieldForms],2)
Etc... for as many as you need.
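For comparison, the same two helpers reduce to a few lines in most languages; a Python sketch mirroring the VBA functions' 1-based indexing and Null-on-out-of-range behaviour (None here):

```python
def count_cs_words(s):
    """Count comma-separated words; 0 for a non-string or empty value."""
    if not isinstance(s, str) or len(s) == 0:
        return 0
    return s.count(",") + 1

def get_cs_word(s, indx):
    """Return the indx-th (1-based) comma-separated word, trimmed,
    or None when indx is out of range (the VBA version returns Null)."""
    if indx < 1 or indx > count_cs_words(s):
        return None
    return s.split(",")[indx - 1].strip()
```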
qid & accept id:
(20299075, 20300834)
query:
Convert Multiple Rows into Multiple Columns
soup:
This:-
\ncreate table #source (\n aid int,\n qid int,\n answer char(2),\n istrue bit\n)\ninsert into #source values\n (1,11,'a1',1),\n (2,11,'a2',0),\n (3,11,'a3',0),\n (4,11,'a4',0),\n (1,12,'a5',0),\n (2,12,'a6',0),\n (3,12,'a7',1),\n (4,12,'a8',0)\n\nselect s.qid,\n q1.aid as aid1, q1.answer as answer1, q1.istrue as istrue1,\n q2.aid as aid2, q2.answer as answer2, q2.istrue as istrue2,\n q3.aid as aid3, q3.answer as answer3, q3.istrue as istrue3,\n q4.aid as aid4, q4.answer as answer4, q4.istrue as istrue4\nfrom (\n select distinct qid\n from #source\n) s\njoin #source q1 on q1.qid=s.qid and q1.aid=1\njoin #source q2 on q2.qid=s.qid and q2.aid=2\njoin #source q3 on q3.qid=s.qid and q3.aid=3\njoin #source q4 on q4.qid=s.qid and q4.aid=4\norder by s.qid\n
\nproduces:-
\nqid aid1 answer1 istrue1 aid2 answer2 istrue2 aid3 answer3 istrue3 aid4 answer4 istrue4\n11 1 a1 1 2 a2 0 3 a3 0 4 a4 0\n12 1 a5 0 2 a6 0 3 a7 1 4 a8 0\n
\n
soup wrap:
This:-
create table #source (
aid int,
qid int,
answer char(2),
istrue bit
)
insert into #source values
(1,11,'a1',1),
(2,11,'a2',0),
(3,11,'a3',0),
(4,11,'a4',0),
(1,12,'a5',0),
(2,12,'a6',0),
(3,12,'a7',1),
(4,12,'a8',0)
select s.qid,
q1.aid as aid1, q1.answer as answer1, q1.istrue as istrue1,
q2.aid as aid2, q2.answer as answer2, q2.istrue as istrue2,
q3.aid as aid3, q3.answer as answer3, q3.istrue as istrue3,
q4.aid as aid4, q4.answer as answer4, q4.istrue as istrue4
from (
select distinct qid
from #source
) s
join #source q1 on q1.qid=s.qid and q1.aid=1
join #source q2 on q2.qid=s.qid and q2.aid=2
join #source q3 on q3.qid=s.qid and q3.aid=3
join #source q4 on q4.qid=s.qid and q4.aid=4
order by s.qid
produces:-
qid aid1 answer1 istrue1 aid2 answer2 istrue2 aid3 answer3 istrue3 aid4 answer4 istrue4
11 1 a1 1 2 a2 0 3 a3 0 4 a4 0
12 1 a5 0 2 a6 0 3 a7 1 4 a8 0
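The self-join pivot translates almost verbatim to other engines; here is a trimmed two-slot version run through Python's sqlite3 (an ordinary table instead of #source, and only the answer columns, for brevity):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE source (aid INT, qid INT, answer TEXT, istrue INT);
    INSERT INTO source VALUES
        (1,11,'a1',1),(2,11,'a2',0),
        (1,12,'a5',0),(2,12,'a6',0);
""")
# One join per answer slot turns N rows per qid into one wide row
pivoted = con.execute("""
    SELECT s.qid, q1.answer AS answer1, q2.answer AS answer2
    FROM (SELECT DISTINCT qid FROM source) s
    JOIN source q1 ON q1.qid = s.qid AND q1.aid = 1
    JOIN source q2 ON q2.qid = s.qid AND q2.aid = 2
    ORDER BY s.qid
""").fetchall()
```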
qid & accept id:
(20304400, 20304791)
query:
Use send_dbmail to send an email for each row in sql table
soup:
1) You could use a LOCAL FAST_FORWARD cursor to read every row and then to execute sp_send_dbmail
\nor
\n2) You could dynamically generate a sql statement that includes the list of EXEC sp_send_dbmail statements like this:
\nDECLARE @SqlStatement NVARCHAR(MAX) = N'\n EXEC msdb.dbo.sp_send_dbmail @recipients=''dest01@domain.com'', ...; \n EXEC msdb.dbo.sp_send_dbmail @recipients=''dest02@domain.com'', ...; \n EXEC msdb.dbo.sp_send_dbmail @recipients=''dest03@domain.com'', ...;\n ...';\nEXEC(@SqlStatement);\n
\nor
\nDECLARE @bodyText NVARCHAR(MAX);\nSET @bodyText = ...;\n\nDECLARE @SqlStatement NVARCHAR(MAX) = N'\n EXEC msdb.dbo.sp_send_dbmail @recipients=''dest01@domain.com'', @body = @pBody, ...; \n EXEC msdb.dbo.sp_send_dbmail @recipients=''dest02@domain.com'', @body = @pBody, ...; \n EXEC msdb.dbo.sp_send_dbmail @recipients=''dest03@domain.com'', @body = @pBody, ...; \n ...';\nEXEC sp_executesql @SqlStatement, N'@pBody NVARCHAR(MAX)', @pBody = @bodyText;\n
\n
soup wrap:
1) You could use a LOCAL FAST_FORWARD cursor to read every row and then execute sp_send_dbmail for each one
or
2) You could dynamically generate a sql statement that includes the list of EXEC sp_send_dbmail statements like this:
DECLARE @SqlStatement NVARCHAR(MAX) = N'
EXEC msdb.dbo.sp_send_dbmail @recipients=''dest01@domain.com'', ...;
EXEC msdb.dbo.sp_send_dbmail @recipients=''dest02@domain.com'', ...;
EXEC msdb.dbo.sp_send_dbmail @recipients=''dest03@domain.com'', ...;
...';
EXEC(@SqlStatement);
or
DECLARE @bodyText NVARCHAR(MAX);
SET @bodyText = ...;
DECLARE @SqlStatement NVARCHAR(MAX) = N'
EXEC msdb.dbo.sp_send_dbmail @recipients=''dest01@domain.com'', @body = @pBody, ...;
EXEC msdb.dbo.sp_send_dbmail @recipients=''dest02@domain.com'', @body = @pBody, ...;
EXEC msdb.dbo.sp_send_dbmail @recipients=''dest03@domain.com'', @body = @pBody, ...;
...';
EXEC sp_executesql @SqlStatement, N'@pBody NVARCHAR(MAX)', @pBody = @bodyText;
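The generation step in option 2 is plain string concatenation per recipient; a hypothetical Python sketch that builds the same batch text (the recipient list is illustrative, and only the statement text is produced - nothing is sent):

```python
def build_mail_batch(recipients):
    """Build one EXEC sp_send_dbmail statement per recipient.
    Single quotes are doubled, as T-SQL string literals require."""
    lines = [
        "EXEC msdb.dbo.sp_send_dbmail "
        "@recipients='{0}', @body=@pBody;".format(r.replace("'", "''"))
        for r in recipients
    ]
    return "\n".join(lines)

batch = build_mail_batch(["dest01@domain.com", "dest02@domain.com"])
```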
qid & accept id:
(20358637, 20359071)
query:
Retaining data while modifying column data type
soup:
The upper limit of the varray2 data type can be increased with alter type statement:
\ncreate or replace type Varray2 is varray(50) of varchar2(20);\n/\nTYPE VARRAY2 compiled\n\n\ncreate table owner (\n modified date, \n id1 Varchar2(18), -- use varchar2 data type, not varchar. \n state Varchar2(2), \n contributer_ids Varray2\n)\n/\n\ntable OWNER created.\n
\nCurrent information about varray2 data type:
\nSQL> clear screen;\nSQL> column type_name format a11;\nSQL> column upper_bound format a11\n\nSQL> select t.type_name\n 2 , t.upper_bound\n 3 from all_coll_types t\n 4 where type_name = 'VARRAY2';\n\nTYPE_NAME UPPER_BOUND\n----------- -----------\nVARRAY2 50 \n
\nChange the upper limit of the varray2 data type:
\nSQL> alter type Varray2 modify limit 150 cascade;\n\ntype VARRAY2 altered.\n
\nAfter the upper limit of the varray2 data type has changed:
\nSQL> clear screen;\nSQL> column type_name format a11;\nSQL> column upper_bound format a11\n\nSQL> select t.type_name\n 2 , t.upper_bound\n 3 from all_coll_types t\n 4 where type_name = 'VARRAY2';\n\nTYPE_NAME UPPER_BOUND\n----------- -----------\nVARRAY2 150 \n
\ncascade clause of the alter type statement propagates the data type change to the dependent objects, whether it's a table or another data type.
\n
soup wrap:
The upper limit of the varray2 data type can be increased with an ALTER TYPE statement:
create or replace type Varray2 is varray(50) of varchar2(20);
/
TYPE VARRAY2 compiled
create table owner (
modified date,
id1 Varchar2(18), -- use varchar2 data type, not varchar.
state Varchar2(2),
contributer_ids Varray2
)
/
table OWNER created.
Current information about varray2 data type:
SQL> clear screen;
SQL> column type_name format a11;
SQL> column upper_bound format a11
SQL> select t.type_name
2 , t.upper_bound
3 from all_coll_types t
4 where type_name = 'VARRAY2';
TYPE_NAME UPPER_BOUND
----------- -----------
VARRAY2 50
Change the upper limit of the varray2 data type:
SQL> alter type Varray2 modify limit 150 cascade;
type VARRAY2 altered.
After the upper limit of the varray2 data type has changed:
SQL> clear screen;
SQL> column type_name format a11;
SQL> column upper_bound format a11
SQL> select t.type_name
2 , t.upper_bound
3 from all_coll_types t
4 where type_name = 'VARRAY2';
TYPE_NAME UPPER_BOUND
----------- -----------
VARRAY2 150
The CASCADE clause of the ALTER TYPE statement propagates the data type change to the dependent objects, whether that's a table or another data type.
qid & accept id:
(20371389, 20372508)
query:
update column to remove html tags
soup:
UDF stands for "user defined function" - unless you did not define the the function with the name "udf_StripHTML" this simply won't work. I think you refer to this function:
\nCREATE FUNCTION [dbo].[udf_StripHTML]\n(@HTMLText VARCHAR(MAX))\nRETURNS VARCHAR(MAX)\nAS\nBEGIN\nDECLARE @Start INT\nDECLARE @End INT\nDECLARE @Length INT\nSET @Start = CHARINDEX('<',@HTMLText)\nSET @End = CHARINDEX('>',@HTMLText,CHARINDEX('<',@HTMLText))\nSET @Length = (@End - @Start) + 1\nWHILE @Start > 0\nAND @End > 0\nAND @Length > 0\nBEGIN\nSET @HTMLText = STUFF(@HTMLText,@Start,@Length,'')\nSET @Start = CHARINDEX('<',@HTMLText)\nSET @End = CHARINDEX('>',@HTMLText,CHARINDEX('<',@HTMLText))\nSET @Length = (@End - @Start) + 1\nEND\nRETURN LTRIM(RTRIM(@HTMLText))\nEND\nGO\n
\nto test this function do:
\nSELECT dbo.udf_StripHTML('UDF at stackoverflow.com
Stackoverflow.com')\n
\nResult Set:
\nUDF at stackoverflow.com Stackoverflow.com
\nThis function was set up by Pinal Dave - see here.
\nHope this helps.
\n
soup wrap:
UDF stands for "user-defined function" - unless you defined a function with the name "udf_StripHTML", this simply won't work. I think you refer to this function:
CREATE FUNCTION [dbo].[udf_StripHTML]
(@HTMLText VARCHAR(MAX))
RETURNS VARCHAR(MAX)
AS
BEGIN
DECLARE @Start INT
DECLARE @End INT
DECLARE @Length INT
SET @Start = CHARINDEX('<',@HTMLText)
SET @End = CHARINDEX('>',@HTMLText,CHARINDEX('<',@HTMLText))
SET @Length = (@End - @Start) + 1
WHILE @Start > 0
AND @End > 0
AND @Length > 0
BEGIN
SET @HTMLText = STUFF(@HTMLText,@Start,@Length,'')
SET @Start = CHARINDEX('<',@HTMLText)
SET @End = CHARINDEX('>',@HTMLText,CHARINDEX('<',@HTMLText))
SET @Length = (@End - @Start) + 1
END
RETURN LTRIM(RTRIM(@HTMLText))
END
GO
to test this function do:
SELECT dbo.udf_StripHTML('UDF at stackoverflow.com
Stackoverflow.com')
Result Set:
UDF at stackoverflow.com Stackoverflow.com
This function was set up by Pinal Dave - see here.
Hope this helps.
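For comparison, the same tag-stripping loop can be sketched outside T-SQL. This is a rough Python equivalent (the function name and test strings are my own): like the UDF, it repeatedly deletes the first '<...>' span and finally trims whitespace.

```python
def strip_html(text):
    # Find the first '<' and the first '>' after it, as the UDF's
    # CHARINDEX calls do, then cut that span out (the STUFF step).
    start = text.find('<')
    end = text.find('>', start if start >= 0 else 0)
    while start >= 0 and end > start:
        text = text[:start] + text[end + 1:]
        start = text.find('<')
        end = text.find('>', start if start >= 0 else 0)
    return text.strip()  # LTRIM(RTRIM(...))
```

Like the UDF, this is a blunt instrument: it does not handle a literal '<' inside attribute values or comments.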
qid & accept id:
(20401247, 20401409)
query:
Getting the number of rows on MySQL with SQL and PHP
soup:
I advise you to change the peopleTable table to the following structure
\npeopleTable\nperson fruit_id\njohn 1\n...\n
\nAnd for your question you need the following SQL
\nSELECT a.id, COUNT(*) as count FROM fruitsTable a\nLEFT JOIN peopleTable b ON a.id = b.fruit_id\nGROUP BY a.id\n
\nThis will output the following (example data)
\nid count\n1 2\n2 4\n... \n
\nAnd the update query
\nUPDATE fruitTable a SET numberOfPeople = (\n SELECT COUNT(*) FROM peopleTable b WHERE a.id = b.fruit_id GROUP BY b.fruit_id\n);\n
\n
soup wrap:
I advise you to change the peopleTable table to the following structure
peopleTable
person fruit_id
john 1
...
And for your question you need the following SQL
SELECT a.id, COUNT(*) as count FROM fruitsTable a
LEFT JOIN peopleTable b ON a.id = b.fruit_id
GROUP BY a.id
This will output the following (example data)
id count
1 2
2 4
...
And the update query
UPDATE fruitTable a SET numberOfPeople = (
SELECT COUNT(*) FROM peopleTable b WHERE a.id = b.fruit_id GROUP BY b.fruit_id
);
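The same join-and-count idea can be checked quickly with Python's built-in sqlite3 (table and column names follow the answer; the sample rows are invented). One tweak: COUNT(b.fruit_id) is used instead of COUNT(*) so a fruit with no people counts as 0 rather than 1.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE fruitsTable (id INTEGER PRIMARY KEY, numberOfPeople INTEGER);
CREATE TABLE peopleTable (person TEXT, fruit_id INTEGER);
INSERT INTO fruitsTable (id, numberOfPeople) VALUES (1, 0), (2, 0);
INSERT INTO peopleTable VALUES ('john', 1), ('mary', 1), ('paul', 2);
""")

# Count people per fruit with a LEFT JOIN, as in the SELECT above.
counts = conn.execute("""
    SELECT a.id, COUNT(b.fruit_id) AS cnt
    FROM fruitsTable a
    LEFT JOIN peopleTable b ON a.id = b.fruit_id
    GROUP BY a.id
""").fetchall()

# Store the counts back with a correlated subquery, as in the UPDATE above.
conn.execute("""
    UPDATE fruitsTable SET numberOfPeople =
        (SELECT COUNT(*) FROM peopleTable b WHERE fruitsTable.id = b.fruit_id)
""")
stored = conn.execute(
    "SELECT id, numberOfPeople FROM fruitsTable ORDER BY id").fetchall()
```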
qid & accept id:
(20435406, 20435941)
query:
How to compute the sum of a variable in R considering ID variable and Index variable and save results in a matrix
soup:
You can do it in three steps, assuming tout is your data frame:
\n> library(data.table)\n> tout <- as.data.table(tout)\n> setkey(tout, ProductID)\n> cart <- tout[tout, allow.cartesian = TRUE]\n ProductID Id Price Index Id.1 Price.1 Index.1\n 1: 1 1 1 1 1 1 1\n 2: 1 10 1 2 1 1 1\n 3: 1 21 1 3 1 1 1\n 4: 1 34 1 4 1 1 1\n 5: 1 1 1 1 10 1 2\n --- \n168: 14 46 11 4 33 11 3\n169: 14 33 11 3 46 11 4\n170: 14 46 11 4 46 11 4\n171: 15 47 12 4 47 12 4\n172: 16 48 12 4 48 12 4\n
\nNow cart is a cartesian product of tout by itself, using ProductID as the key.
\n> x <- cart[, sum(Price), by = list(Index, Index.1)]\n Index Index.1 V1\n 1: 1 1 45\n 2: 2 1 45\n 3: 3 1 45\n 4: 4 1 45\n 5: 1 2 45\n 6: 2 2 66\n 7: 3 2 66\n 8: 4 2 66\n 9: 1 3 45\n10: 2 3 66\n11: 3 3 88\n12: 4 3 88\n13: 1 4 45\n14: 2 4 66\n15: 3 4 88\n16: 4 4 112\n
\nx is almost what you need, but in a data table (long) form. You need to cast it to matrix (wide) form with the help of acast from the reshape2 package:
\n> library(reshape2)\n> a <- acast(x, Index ~ Index.1, value.var = "V1")\n 1 2 3 4\n1 45 45 45 45\n2 45 66 66 66\n3 45 66 88 88\n4 45 66 88 112\n
\nFinally, to set upper triangular part of the matrix to NA:
\n> a[upper.tri(a)] <- NA\n 1 2 3 4\n1 45 NA NA NA\n2 45 66 NA NA\n3 45 66 88 NA\n4 45 66 88 112\n
\n
soup wrap:
You can do it in three steps, assuming tout is your data frame:
> library(data.table)
> tout <- as.data.table(tout)
> setkey(tout, ProductID)
> cart <- tout[tout, allow.cartesian = TRUE]
ProductID Id Price Index Id.1 Price.1 Index.1
1: 1 1 1 1 1 1 1
2: 1 10 1 2 1 1 1
3: 1 21 1 3 1 1 1
4: 1 34 1 4 1 1 1
5: 1 1 1 1 10 1 2
---
168: 14 46 11 4 33 11 3
169: 14 33 11 3 46 11 4
170: 14 46 11 4 46 11 4
171: 15 47 12 4 47 12 4
172: 16 48 12 4 48 12 4
Now cart is a cartesian product of tout by itself, using ProductID as the key.
> x <- cart[, sum(Price), by = list(Index, Index.1)]
Index Index.1 V1
1: 1 1 45
2: 2 1 45
3: 3 1 45
4: 4 1 45
5: 1 2 45
6: 2 2 66
7: 3 2 66
8: 4 2 66
9: 1 3 45
10: 2 3 66
11: 3 3 88
12: 4 3 88
13: 1 4 45
14: 2 4 66
15: 3 4 88
16: 4 4 112
x is almost what you need, but in a data table (long) form. You need to cast it to matrix (wide) form with the help of acast from the reshape2 package:
> library(reshape2)
> a <- acast(x, Index ~ Index.1, value.var = "V1")
1 2 3 4
1 45 45 45 45
2 45 66 66 66
3 45 66 88 88
4 45 66 88 112
Finally, to set upper triangular part of the matrix to NA:
> a[upper.tri(a)] <- NA
1 2 3 4
1 45 NA NA NA
2 45 66 NA NA
3 45 66 88 NA
4 45 66 88 112
qid & accept id:
(20454604, 20456616)
query:
PL/SQL help. How to write a anonymous block that inserts 100 new rows
soup:
Your insert statement should look like this:
\nINSERT INTO emp2 \n( EMPLOYEE_ID, FIRST_NAME, LAST_NAME, HIRE_DATE, SALARY, DEPARTMENT_ID )\nVALUES \n( i, 'Fname', 'Lname', sysdate, 100, 10 );\n
\nYou need to add an IF statement for the part "also add code that inserts placeholders in the first_name and last_name columns for employee ID 2000". Like this:
\nIF i = 2000\nTHEN\n INSERT INTO emp2 \n ( EMPLOYEE_ID, FIRST_NAME, LAST_NAME, HIRE_DATE, SALARY, DEPARTMENT_ID )\n VALUES \n ( i, 'Fname ' || i, 'Lname ' || i, sysdate, 100, 10 );\nELSE\n INSERT INTO emp2 \n ( EMPLOYEE_ID, FIRST_NAME, LAST_NAME, HIRE_DATE, SALARY, DEPARTMENT_ID )\n VALUES \n ( i, 'Fname', 'Lname', sysdate, 100, 10 );\nEND IF;\n
\n
soup wrap:
Your insert statement should look like this:
INSERT INTO emp2
( EMPLOYEE_ID, FIRST_NAME, LAST_NAME, HIRE_DATE, SALARY, DEPARTMENT_ID )
VALUES
( i, 'Fname', 'Lname', sysdate, 100, 10 );
You need to add an IF statement for the part "also add code that inserts placeholders in the first_name and last_name columns for employee ID 2000". Like this:
IF i = 2000
THEN
INSERT INTO emp2
( EMPLOYEE_ID, FIRST_NAME, LAST_NAME, HIRE_DATE, SALARY, DEPARTMENT_ID )
VALUES
( i, 'Fname ' || i, 'Lname ' || i, sysdate, 100, 10 );
ELSE
INSERT INTO emp2
( EMPLOYEE_ID, FIRST_NAME, LAST_NAME, HIRE_DATE, SALARY, DEPARTMENT_ID )
VALUES
( i, 'Fname', 'Lname', sysdate, 100, 10 );
END IF;
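The shape of the anonymous block (loop, one branch for the special employee ID) can be sketched in Python with sqlite3; the 1951-2050 ID range, salary, and department values here are assumptions for the sketch.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE emp2 (
    EMPLOYEE_ID INTEGER, FIRST_NAME TEXT, LAST_NAME TEXT,
    HIRE_DATE TEXT, SALARY INTEGER, DEPARTMENT_ID INTEGER)""")

# Insert 100 rows, with placeholder names for employee 2000,
# mirroring the IF/ELSE branch in the PL/SQL block.
for i in range(1951, 2051):
    if i == 2000:
        first, last = f"Fname {i}", f"Lname {i}"
    else:
        first, last = "Fname", "Lname"
    conn.execute(
        "INSERT INTO emp2 VALUES (?, ?, ?, date('now'), 100, 10)",
        (i, first, last),
    )

total = conn.execute("SELECT COUNT(*) FROM emp2").fetchone()[0]
special = conn.execute(
    "SELECT FIRST_NAME FROM emp2 WHERE EMPLOYEE_ID = 2000").fetchone()[0]
```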
qid & accept id:
(20510051, 20511184)
query:
Getting the $rank variable and updating it in the table
soup:
Instead of constantly hitting the database with multiple queries, consider doing it all at once like this
\nUPDATE bank t JOIN \n(\n SELECT id, bankaccount, \n (\n SELECT COUNT(*)\n FROM bank\n WHERE id = b.id\n AND bankbalance > b.bankbalance\n ) + 1 rank\n FROM bank b\n WHERE id = 1\n) s \n ON t.id = s.id\n AND t.bankaccount = s.bankaccount\n SET t.bankaccountranking = rank;\n
\nHere is SQLFiddle demo
\nor with two statements, leveraging user variables and ORDER BY in UPDATE
\nSET @rnum = 0;\nUPDATE bank\n SET bankaccountranking = (@rnum := @rnum + 1)\n WHERE id = 1\n ORDER BY bankbalance DESC;\n
\nHere is SQLFiddle demo
\n
\nNow php code might look like this
\n$sessionid = $_SESSION['uid'];\n\n$sql = "UPDATE bank t JOIN \n(\n SELECT id, bankaccount, \n (\n SELECT COUNT(*)\n FROM bank\n WHERE id = b.id\n AND bankbalance > b.bankbalance\n ) + 1 rank\n FROM bank b\n WHERE id = :id\n) s \n ON t.id = s.id\n AND t.bankaccount = s.bankaccount\n SET t.bankaccountranking = rank;";\n\n$stmt = $conn->prepare($sql);\n$stmt->bindParam(':id', $sessionid , PDO::PARAM_INT);\n$stmt->execute();\n
\n
\nUPDATE: to implement equivalent of DENSE_RANK() analytic function with a subquery you can do
\nUPDATE bank t JOIN \n(\n SELECT id, bankaccount, \n (\n SELECT COUNT(DISTINCT bankbalance)\n FROM bank\n WHERE id = b.id\n AND bankbalance > b.bankbalance\n ) + 1 rank\n FROM bank b\n WHERE id = 1\n) s \n ON t.id = s.id\n AND t.bankaccount = s.bankaccount\n SET t.bankaccountranking = rank;\n
\nHere is SQLFiddle demo
\nor with user(session) variables
\nSET @r = 0, @b = NULL; \nUPDATE bank b JOIN\n(\n SELECT id, bankaccount, @r := IF(@b = bankbalance, @r, @r + 1) rank, @b := bankbalance\n FROM bank\n WHERE id = 1\n ORDER BY bankbalance DESC\n) s\n ON b.id = s.id\n AND b.bankaccount = s.bankaccount\n SET bankaccountranking = rank;\n
\nHere is SQLFiddle demo
\n
soup wrap:
Instead of constantly hitting the database with multiple queries, consider doing it all at once like this
UPDATE bank t JOIN
(
SELECT id, bankaccount,
(
SELECT COUNT(*)
FROM bank
WHERE id = b.id
AND bankbalance > b.bankbalance
) + 1 rank
FROM bank b
WHERE id = 1
) s
ON t.id = s.id
AND t.bankaccount = s.bankaccount
SET t.bankaccountranking = rank;
Here is SQLFiddle demo
or with two statements, leveraging user variables and ORDER BY in UPDATE
SET @rnum = 0;
UPDATE bank
SET bankaccountranking = (@rnum := @rnum + 1)
WHERE id = 1
ORDER BY bankbalance DESC;
Here is SQLFiddle demo
Now php code might look like this
$sessionid = $_SESSION['uid'];
$sql = "UPDATE bank t JOIN
(
SELECT id, bankaccount,
(
SELECT COUNT(*)
FROM bank
WHERE id = b.id
AND bankbalance > b.bankbalance
) + 1 rank
FROM bank b
WHERE id = :id
) s
ON t.id = s.id
AND t.bankaccount = s.bankaccount
SET t.bankaccountranking = rank;";
$stmt = $conn->prepare($sql);
$stmt->bindParam(':id', $sessionid , PDO::PARAM_INT);
$stmt->execute();
UPDATE: to implement equivalent of DENSE_RANK() analytic function with a subquery you can do
UPDATE bank t JOIN
(
SELECT id, bankaccount,
(
SELECT COUNT(DISTINCT bankbalance)
FROM bank
WHERE id = b.id
AND bankbalance > b.bankbalance
) + 1 rank
FROM bank b
WHERE id = 1
) s
ON t.id = s.id
AND t.bankaccount = s.bankaccount
SET t.bankaccountranking = rank;
Here is SQLFiddle demo
or with user(session) variables
SET @r = 0, @b = NULL;
UPDATE bank b JOIN
(
SELECT id, bankaccount, @r := IF(@b = bankbalance, @r, @r + 1) rank, @b := bankbalance
FROM bank
WHERE id = 1
ORDER BY bankbalance DESC
) s
ON b.id = s.id
AND b.bankaccount = s.bankaccount
SET bankaccountranking = rank;
Here is SQLFiddle demo
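The correlated-subquery form of the first UPDATE (rank = rows with a higher balance, plus one) is portable SQL; here is a small sqlite3 check with invented balances.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE bank (id INTEGER, bankaccount INTEGER, bankbalance INTEGER,
                   bankaccountranking INTEGER);
INSERT INTO bank VALUES (1, 1, 500, NULL), (1, 2, 900, NULL), (1, 3, 200, NULL);
""")

# Rank = number of rows for the same user with a higher balance, plus one;
# the correlated-subquery form of the UPDATE above, without the MySQL
# derived-table wrapper (SQLite does not need it).
conn.execute("""
    UPDATE bank SET bankaccountranking =
        (SELECT COUNT(*) FROM bank b
         WHERE b.id = bank.id AND b.bankbalance > bank.bankbalance) + 1
    WHERE id = 1
""")
ranks = dict(conn.execute(
    "SELECT bankaccount, bankaccountranking FROM bank WHERE id = 1").fetchall())
```

With COUNT(DISTINCT bankbalance) in place of COUNT(*), the same statement produces dense ranks for ties, as in the answer's UPDATE section.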
qid & accept id:
(20519532, 20525814)
query:
Storing multiple tags in one column
soup:
This is a good case for a bridge table. Let's say you have in your database:
\nfile_info\n---------\nfile_id\nauthor\ncreate_date\n\ntag_info\n--------\ntag_id\ntag_name\n
\ntag_id is a surrogate key, and would be a unique, incrementing value for each new tag. So it may look like:
\ntag_id tag_name\n------ --------\n 1 Apples\n 2 Pears\n 3 Peaches\n
\nYou then create the bridge, which links files to the applicable tags:
\nfile_tag_bridge\n---------------\nfile_id\ntag_id\n
\nThe combination of file_id/tag_id will be unique in the table (it is a compound key), but a given file_id may be associated with multiple (different) tag_id, and vice-versa.
\nYou will have one row in this table for each tag associated with a file:
\nfile_id tag_id\n------- ------\n 1 1\n 2 2\n 2 3\n
\nIn this case, file 1 is associated with the Apples tag; file 2 is associated with Pears and Peaches. File 3 is not associated with any tags, and therefore is not represented in the bridge table.
\n
soup wrap:
This is a good case for a bridge table. Let's say you have in your database:
file_info
---------
file_id
author
create_date
tag_info
--------
tag_id
tag_name
tag_id is a surrogate key, and would be a unique, incrementing value for each new tag. So it may look like:
tag_id tag_name
------ --------
1 Apples
2 Pears
3 Peaches
You then create the bridge, which links files to the applicable tags:
file_tag_bridge
---------------
file_id
tag_id
The combination of file_id/tag_id will be unique in the table (it is a compound key), but a given file_id may be associated with multiple (different) tag_id, and vice-versa.
You will have one row in this table for each tag associated with a file:
file_id tag_id
------- ------
1 1
2 2
2 3
In this case, file 1 is associated with the Apples tag; file 2 is associated with Pears and Peaches. File 3 is not associated with any tags, and therefore is not represented in the bridge table.
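A minimal sqlite3 sketch of the three tables (the authors and dates are invented) shows the join through the bridge:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE file_info (file_id INTEGER PRIMARY KEY, author TEXT, create_date TEXT);
CREATE TABLE tag_info  (tag_id INTEGER PRIMARY KEY, tag_name TEXT);
CREATE TABLE file_tag_bridge (
    file_id INTEGER, tag_id INTEGER,
    PRIMARY KEY (file_id, tag_id));   -- the compound key
INSERT INTO file_info VALUES
    (1, 'a', '2013-12-01'), (2, 'b', '2013-12-02'), (3, 'c', '2013-12-03');
INSERT INTO tag_info VALUES (1, 'Apples'), (2, 'Pears'), (3, 'Peaches');
INSERT INTO file_tag_bridge VALUES (1, 1), (2, 2), (2, 3);
""")

# Tags for file 2 come from joining through the bridge.
tags = [r[0] for r in conn.execute("""
    SELECT t.tag_name
    FROM file_tag_bridge b JOIN tag_info t ON b.tag_id = t.tag_id
    WHERE b.file_id = 2 ORDER BY t.tag_name
""")]
```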
qid & accept id:
(20546039, 20546101)
query:
Create custom field in SELECT if other field is null
soup:
Use CASE instead of IF:
\nSELECT \n FIRST_NAME,\n LAST_NAME,\n ULTIMATE_PARENT_NAME, \n CASE WHEN LOCATION_ACCOUNT_ID IS NULL THEN 'Y' ELSE '' END AS IMPACT\nFROM (\n SELECT DISTINCT \n A.FIRST_NAME,\n A.LAST_NAME,\n B.LOCATION_ACCOUNT_ID,\n A.ULTIMATE_PARENT_NAME\n FROM ACTIVE_ACCOUNTS A,\n QL_ASSETS B\n WHERE A.ACCOUNT_ID = B.LOCATION_ACCOUNT_ID(+)\n
\nYou should also use LEFT JOIN syntax instead of the old (+) syntax (but that's more of a style choice in this case - it does not change the result):
\nSELECT \n FIRST_NAME,\n LAST_NAME,\n ULTIMATE_PARENT_NAME, \n CASE WHEN LOCATION_ACCOUNT_ID IS NULL THEN 'Y' ELSE '' END AS IMPACT\nFROM (\n SELECT DISTINCT \n A.FIRST_NAME,\n A.LAST_NAME,\n B.LOCATION_ACCOUNT_ID,\n A.ULTIMATE_PARENT_NAME\n FROM ACTIVE_ACCOUNTS A\n LEFT JOIN QL_ASSETS B\n ON A.ACCOUNT_ID = B.LOCATION_ACCOUNT_ID\n )\n
\nIn fact, since you aren't using any of the columns from B in your result (only checking for existence) you can just use EXISTS:
\nSELECT \n FIRST_NAME,\n LAST_NAME,\n ULTIMATE_PARENT_NAME, \n CASE WHEN NOT EXISTS(SELECT NULL \n FROM QL_ASSETS \n WHERE LOCATION_ACCOUNT_ID = A.ACCOUNT_ID)\n THEN 'Y' \n ELSE '' \n END AS IMPACT\n FROM ACTIVE_ACCOUNTS A\n
\n
soup wrap:
Use CASE instead of IF:
SELECT
FIRST_NAME,
LAST_NAME,
ULTIMATE_PARENT_NAME,
CASE WHEN LOCATION_ACCOUNT_ID IS NULL THEN 'Y' ELSE '' END AS IMPACT
FROM (
SELECT DISTINCT
A.FIRST_NAME,
A.LAST_NAME,
B.LOCATION_ACCOUNT_ID,
A.ULTIMATE_PARENT_NAME
FROM ACTIVE_ACCOUNTS A,
QL_ASSETS B
WHERE A.ACCOUNT_ID = B.LOCATION_ACCOUNT_ID(+)
You should also use LEFT JOIN syntax instead of the old (+) syntax (but that's more of a style choice in this case - it does not change the result):
SELECT
FIRST_NAME,
LAST_NAME,
ULTIMATE_PARENT_NAME,
CASE WHEN LOCATION_ACCOUNT_ID IS NULL THEN 'Y' ELSE '' END AS IMPACT
FROM (
SELECT DISTINCT
A.FIRST_NAME,
A.LAST_NAME,
B.LOCATION_ACCOUNT_ID,
A.ULTIMATE_PARENT_NAME
FROM ACTIVE_ACCOUNTS A
LEFT JOIN QL_ASSETS B
ON A.ACCOUNT_ID = B.LOCATION_ACCOUNT_ID
)
In fact, since you aren't using any of the columns from B in your result (only checking for existence) you can just use EXISTS:
SELECT
FIRST_NAME,
LAST_NAME,
ULTIMATE_PARENT_NAME,
CASE WHEN NOT EXISTS(SELECT NULL
FROM QL_ASSETS
WHERE LOCATION_ACCOUNT_ID = A.ACCOUNT_ID)
THEN 'Y'
ELSE ''
END AS IMPACT
FROM ACTIVE_ACCOUNTS A
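Note that the EXISTS form needs NOT EXISTS to match the first query, which flags the rows with no matching asset. A small sqlite3 check (the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ACTIVE_ACCOUNTS (ACCOUNT_ID INTEGER, FIRST_NAME TEXT);
CREATE TABLE QL_ASSETS (LOCATION_ACCOUNT_ID INTEGER);
INSERT INTO ACTIVE_ACCOUNTS VALUES (1, 'Ann'), (2, 'Bob');
INSERT INTO QL_ASSETS VALUES (1);
""")

# IMPACT is 'Y' exactly when no asset row matches the account,
# which is what the LEFT JOIN + IS NULL version computes.
rows = conn.execute("""
    SELECT FIRST_NAME,
           CASE WHEN NOT EXISTS (SELECT NULL FROM QL_ASSETS
                                 WHERE LOCATION_ACCOUNT_ID = A.ACCOUNT_ID)
                THEN 'Y' ELSE '' END AS IMPACT
    FROM ACTIVE_ACCOUNTS A ORDER BY ACCOUNT_ID
""").fetchall()
```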
qid & accept id:
(20551358, 20552934)
query:
Use the value an XML element as a variable for a procedure
soup:
If your question is about how to read from an XML file, here is an example.
\nAssuming this is declared:
\nDim xml = \n DBUser2 \n \n N127.0.0.1\CESSQL \n Marino \n \n \n \n \n
\nIt's just one line of code:
\nxml.Element("ServerDatabase").Value\n
\nOr, to keep your variable names:
\nDim ServerDatabaseValue As String = xml.Element("ServerDatabase").Value\n
\nAlways specify variable types. To help you with that, you can set Option Strict On and Option Infer Off in your project settings. This can improve your code quality by forcing you into certain (good) development habits.
\n
soup wrap:
If your question is about how to read from an XML file, here is an example.
Assuming this is declared:
Dim xml =
DBUser2
N127.0.0.1\CESSQL
Marino
It's just one line of code:
xml.Element("ServerDatabase").Value
Or, to keep your variable names:
Dim ServerDatabaseValue As String = xml.Element("ServerDatabase").Value
Always specify variable types. To help you with that, you can set Option Strict On and Option Infer Off in your project settings. This can improve your code quality by forcing you into certain (good) development habits.
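The same one-liner exists in other languages; for instance Python's ElementTree. Only the ServerDatabase element name and its value come from the question - the wrapper element is made up for the sketch.

```python
import xml.etree.ElementTree as ET

# Hypothetical config fragment; only ServerDatabase is from the question.
doc = ET.fromstring(
    "<Config><ServerDatabase>N127.0.0.1\\CESSQL</ServerDatabase></Config>")

# One line of code, same as xml.Element("ServerDatabase").Value in VB.
server_database = doc.find("ServerDatabase").text
```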
qid & accept id:
(20589984, 20591149)
query:
listing SQL table's rows in text file
soup:
In order to get the fieldnames, you would have to write something like this
\n for I := 0 to ADODataSet.FieldCount - 1 do \n Write (WOLFile,ADODataSet.Fields[I].displayname);\n writeln (WOLFile);\n
\nOutput the data only with 'write', so that all the column names appear in the same line, then open a new line with 'writeln'.
\nThen you can add your code which iterates over the table. Here's the entire code:
\nwith ADODataSet do\n begin\n for i:= 0 to fieldcount - 1 do write (WOLFile, Fields[I].displayname);\n writeln (WOLFile);\n first;\n while not eof do\n begin\n for I := 0 to FieldCount - 1 do Write (WOLFile, Fields[I].AsString);\n writeln (WOLFile);\n next\n end;\n end;\n end;\n
\nThe columns probably won't left align correctly, but I'll leave that little problem up to you.
\nPeople here don't like the use of the 'with' construct but I don't see any problem in this snippet.
\nYou could also save the output in a stringlist then write the stringlist to a file at the end, instead of using write and writeln. In order to do that, you would have to concatenate the values of each 'for i' loop into a local variable then add that variable to the stringlist. If you add each value to be printed directly to the stringlist, then every value will appear on a separate line.
\n
soup wrap:
In order to get the fieldnames, you would have to write something like this
for I := 0 to ADODataSet.FieldCount - 1 do
Write (WOLFile,ADODataSet.Fields[I].displayname);
writeln (WOLFile);
Output the data only with 'write', so that all the column names appear in the same line, then open a new line with 'writeln'.
Then you can add your code which iterates over the table. Here's the entire code:
with ADODataSet do
begin
for i:= 0 to fieldcount - 1 do write (WOLFile, Fields[I].displayname);
writeln (WOLFile);
first;
while not eof do
begin
for I := 0 to FieldCount - 1 do Write (WOLFile, Fields[I].AsString);
writeln (WOLFile);
next
end;
end;
end;
The columns probably won't left align correctly, but I'll leave that little problem up to you.
People here don't like the use of the 'with' construct but I don't see any problem in this snippet.
You could also save the output in a stringlist then write the stringlist to a file at the end, instead of using write and writeln. In order to do that, you would have to concatenate the values of each 'for i' loop into a local variable then add that variable to the stringlist. If you add each value to be printed directly to the stringlist, then every value will appear on a separate line.
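The same pattern (one header line built from the field names, then one line per row) looks like this in Python with sqlite3 and csv, using an in-memory buffer in place of WOLFile; the table and data are invented.

```python
import csv
import io
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE work_orders (id INTEGER, description TEXT);
INSERT INTO work_orders VALUES (1, 'fix'), (2, 'ship');
""")

cur = conn.execute("SELECT * FROM work_orders")
out = io.StringIO()                    # stands in for the text file
writer = csv.writer(out, delimiter='\t')
writer.writerow([d[0] for d in cur.description])  # field names first
writer.writerows(cur)                             # then every row
lines = out.getvalue().splitlines()
```

Using a tab delimiter sidesteps the column-alignment problem the answer leaves open.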
qid & accept id:
(20637482, 20637609)
query:
Pivot without aggregate - again
soup:
Based on your sample data, you can easily get the result using an aggregate function with a CASE expression:
\nselect userlicenseid,\n startdate,\n max(case when name = 'Other' then value end) Other,\n max(case when name = 'Pathways' then value end) Pathways,\n max(case when name = 'Execution' then value end) Execution,\n max(case when name = 'Focus' then value end) Focus,\n max(case when name = 'Profit' then value end) Profit\nfrom yourtable\ngroup by userlicenseid, startdate;\n
\nSee SQL Fiddle with Demo. Since you are converting string values into columns, then you will want to use either the min() or max() aggregate.
\nYou could use the PIVOT function to get the result as well:
\nselect userlicenseid, startdate,\n Other, Pathways, Execution, Focus, Profit\nfrom\n(\n select userlicenseid, startdate,\n name, value\n from yourtable\n) d\npivot\n(\n max(value)\n for name in (Other, Pathways, Execution, Focus, Profit)\n) piv;\n
\n\n
soup wrap:
Based on your sample data, you can easily get the result using an aggregate function with a CASE expression:
select userlicenseid,
startdate,
max(case when name = 'Other' then value end) Other,
max(case when name = 'Pathways' then value end) Pathways,
max(case when name = 'Execution' then value end) Execution,
max(case when name = 'Focus' then value end) Focus,
max(case when name = 'Profit' then value end) Profit
from yourtable
group by userlicenseid, startdate;
See SQL Fiddle with Demo. Since you are converting string values into columns, then you will want to use either the min() or max() aggregate.
You could use the PIVOT function to get the result as well:
select userlicenseid, startdate,
Other, Pathways, Execution, Focus, Profit
from
(
select userlicenseid, startdate,
name, value
from yourtable
) d
pivot
(
max(value)
for name in (Other, Pathways, Execution, Focus, Profit)
) piv;
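The MAX(CASE ...) form is portable across engines; here is a reduced sqlite3 check with two of the five names (the table name and values are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE licenses (userlicenseid INTEGER, name TEXT, value TEXT);
INSERT INTO licenses VALUES (1, 'Other', 'a'), (1, 'Focus', 'b'), (2, 'Focus', 'c');
""")

# MAX(CASE ...) turns each name into its own column: the
# conditional-aggregation pivot from the answer.
rows = conn.execute("""
    SELECT userlicenseid,
           MAX(CASE WHEN name = 'Other' THEN value END) AS Other,
           MAX(CASE WHEN name = 'Focus' THEN value END) AS Focus
    FROM licenses GROUP BY userlicenseid ORDER BY userlicenseid
""").fetchall()
```

A missing name/user combination comes out as NULL, as in the PIVOT version.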
qid & accept id:
(20643084, 20643411)
query:
Combine sql rows into additional columns
soup:
Of course PostgreSQL supports a pivot function. Use crosstab() from the additional module tablefunc. It's up for debate whether that's "native" or not.
\nRun once per database:
\nCREATE EXTENSION tablefunc;\n
\nAnd consider this detailed explanation:
\nPostgreSQL Crosstab Query
\nHowever, what you are trying to do is the opposite of a pivot function! A counter-pivot. I would use UNION ALL:
\nSELECT item_name, 'store_A'::text AS store, store_a AS quantity\nFROM stock_usage\n\nUNION ALL\nSELECT item_name, 'store_B'::text, store_b\nFROM stock_usage\n\nUNION ALL\nSELECT item_name, 'store_C'::text, store_c\nFROM stock_usage\n\n...\n
\n
soup wrap:
Of course PostgreSQL supports a pivot function. Use crosstab() from the additional module tablefunc. It's up for debate whether that's "native" or not.
Run once per database:
CREATE EXTENSION tablefunc;
And consider this detailed explanation:
PostgreSQL Crosstab Query
However, what you are trying to do is the opposite of a pivot function! A counter-pivot. I would use UNION ALL:
SELECT item_name, 'store_A'::text AS store, store_a AS quantity
FROM stock_usage
UNION ALL
SELECT item_name, 'store_B'::text, store_b
FROM stock_usage
UNION ALL
SELECT item_name, 'store_C'::text, store_c
FROM stock_usage
...
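The UNION ALL counter-pivot runs on any engine; a reduced sqlite3 check with two store columns (the sample data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stock_usage (item_name TEXT, store_a INTEGER, store_b INTEGER);
INSERT INTO stock_usage VALUES ('widget', 3, 5);
""")

# Each UNION ALL branch peels one store column off into rows.
rows = conn.execute("""
    SELECT item_name, 'store_A' AS store, store_a AS quantity FROM stock_usage
    UNION ALL
    SELECT item_name, 'store_B', store_b FROM stock_usage
    ORDER BY store
""").fetchall()
```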
qid & accept id:
(20659137, 20660749)
query:
MySQL: Getting the highest number of a combination of two fields
soup:
Two SQL statements; the second one should do it...
\n\nSELECT\n AA.user, AA.tone, AA.color, MAX(AA.toneCounter) as toneCounter\nFROM (\n SELECT\n user, tone, color, COUNT(tone) as toneCounter\n FROM\n experiments\n LEFT JOIN\n pairings\n ON\n experiments.experimentId = pairings.experimentId \n GROUP BY\n user, tone, color\n) AA\nGroup by\n AA.user, AA.tone\n
\n... my first answer did not satisfy me, so I double-checked it. I think the next statement is more adequate (and even runs on non-MySQL databases)
\n\nSELECT \n AAA.user, AAA.tone, BBB.color, AAA.toneCounter \nFROM (\n SELECT\n AA.user, AA.tone, MAX(AA.toneCounter) as toneCounter\n FROM (\n SELECT\n user, tone, color, COUNT(tone) as toneCounter\n FROM\n experiments\n LEFT JOIN\n pairings\n ON\n experiments.experimentId = pairings.experimentId \n GROUP BY\n user, tone, color\n ) AA\n Group by\n AA.user, AA.tone\n) AAA\njoin (\n SELECT\n BB.user, BB.tone, BB.color, MAX(BB.toneCounter) as toneCounter\n FROM (\n SELECT\n user, tone, color, COUNT(tone) as toneCounter\n FROM\n experiments\n LEFT JOIN\n pairings\n ON\n experiments.experimentId = pairings.experimentId \n GROUP BY\n user, tone, color\n ) BB\n Group by\n BB.user, BB.tone, BB.color \n) BBB\nON\n BBB.user = AAA.user\n AND BBB.tone = AAA.tone \n AND BBB.toneCounter = AAA.toneCounter \n
\n
soup wrap:
Two SQL statements; the second one should do it...
SELECT
AA.user, AA.tone, AA.color, MAX(AA.toneCounter) as toneCounter
FROM (
SELECT
user, tone, color, COUNT(tone) as toneCounter
FROM
experiments
LEFT JOIN
pairings
ON
experiments.experimentId = pairings.experimentId
GROUP BY
user, tone, color
) AA
Group by
AA.user, AA.tone
... my first answer did not satisfy me, so I double-checked it. I think the next statement is more adequate (and even runs on non-MySQL databases)
SELECT
AAA.user, AAA.tone, BBB.color, AAA.toneCounter
FROM (
SELECT
AA.user, AA.tone, MAX(AA.toneCounter) as toneCounter
FROM (
SELECT
user, tone, color, COUNT(tone) as toneCounter
FROM
experiments
LEFT JOIN
pairings
ON
experiments.experimentId = pairings.experimentId
GROUP BY
user, tone, color
) AA
Group by
AA.user, AA.tone
) AAA
join (
SELECT
BB.user, BB.tone, BB.color, MAX(BB.toneCounter) as toneCounter
FROM (
SELECT
user, tone, color, COUNT(tone) as toneCounter
FROM
experiments
LEFT JOIN
pairings
ON
experiments.experimentId = pairings.experimentId
GROUP BY
user, tone, color
) BB
Group by
BB.user, BB.tone, BB.color
) BBB
ON
BBB.user = AAA.user
AND BBB.tone = AAA.tone
AND BBB.toneCounter = AAA.toneCounter
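The structure of the second statement (count per group, then join back on the per-group maximum) can be verified on a toy dataset with sqlite3, here written with one CTE instead of repeated derived tables; the sample rows are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pairings (user TEXT, tone TEXT, color TEXT);
INSERT INTO pairings VALUES
  ('u1', 'low', 'red'), ('u1', 'low', 'red'), ('u1', 'low', 'blue');
""")

# counts = the inner AA/BB subquery; the join keeps only the rows
# whose count equals the per-(user, tone) maximum.
rows = conn.execute("""
    WITH counts AS (
        SELECT user, tone, color, COUNT(*) AS n
        FROM pairings GROUP BY user, tone, color)
    SELECT c.user, c.tone, c.color, c.n
    FROM counts c
    JOIN (SELECT user, tone, MAX(n) AS n FROM counts GROUP BY user, tone) m
      ON c.user = m.user AND c.tone = m.tone AND c.n = m.n
""").fetchall()
```

Like the original, a tie on the maximum count returns several colors per (user, tone).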
qid & accept id:
(20661535, 20711214)
query:
How to Group time segments and check break time
soup:
I believe if you want to combine both times you need to take them out of the GROUP BY and sum them. Based on the results, the reporting can check total hours and break hours. You can add CASE statements if you want to flag them.
\nSELECT ftc.lEmployeeID\n ,ftc.sFirstName\n ,ftc.sLastName\n ,SUM(ftc.TotalHours) AS TotalHours\n ,DATEDIFF(mi, MIN(ftc.dtTimeOut), MAX(ftc.dtTimeIn)) AS BreakTimeMinutes\nFROM dbo.fTimeCard(@StartDate, @EndDate,\n @DeptList, @iActive,@ EmployeeList) AS ftc\nWHERE SUM(ftc.TotalHours) >= 0 AND (ftc.DID IS NOT NULL) OR\n (ftc.DID IS NOT NULL) AND (ftc.dtTimeOut IS NULL)\nGROUP BY ftc.lEmployeeID, ftc.sFirstName, ftc.sLastName\n
\nI made this quick test in SQL and it appears to work the way you want. Did you add something to the GROUP BY?
\ndeclare @table table (emp_id int,name varchar(4), tin time,tout time);\n\ninsert into @table\nVALUES (1,'d','8:30:00','11:35:00'),\n (1,'d','13:00:00','17:00:00');\n\n\nSELECT t.emp_id\n ,t.name\n ,SUM(DATEDIFF(mi, tin,tout))/60 as hours\n ,DATEDIFF(mi, MIN(tout), MAX(tin)) AS BreakTimeMinutes\nFROM @table t\n\nGROUP BY t.emp_id, t.name\n
\n
soup wrap:
I believe if you want to combine both times you need to take them out of the GROUP BY and sum them. Based on the results, the reporting can check total hours and break hours. You can add CASE statements if you want to flag them.
SELECT ftc.lEmployeeID
,ftc.sFirstName
,ftc.sLastName
,SUM(ftc.TotalHours) AS TotalHours
,DATEDIFF(mi, MIN(ftc.dtTimeOut), MAX(ftc.dtTimeIn)) AS BreakTimeMinutes
FROM dbo.fTimeCard(@StartDate, @EndDate,
@DeptList, @iActive,@ EmployeeList) AS ftc
WHERE ftc.DID IS NOT NULL
GROUP BY ftc.lEmployeeID, ftc.sFirstName, ftc.sLastName
HAVING SUM(ftc.TotalHours) >= 0
I made this quick test in SQL and it appears to work the way you want. Did you add something to the GROUP BY?
declare @table table (emp_id int,name varchar(4), tin time,tout time);
insert into @table
VALUES (1,'d','8:30:00','11:35:00'),
(1,'d','13:00:00','17:00:00');
SELECT t.emp_id
,t.name
,SUM(DATEDIFF(mi, tin,tout))/60 as hours
,DATEDIFF(mi, MIN(tout), MAX(tin)) AS BreakTimeMinutes
FROM @table t
GROUP BY t.emp_id, t.name
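The arithmetic in the quick test can be reproduced in plain Python to sanity-check the expected numbers (same sample shifts as the test data):

```python
from datetime import datetime

# One employee's shifts for a day: (time in, time out), as in the test data.
shifts = [("8:30:00", "11:35:00"), ("13:00:00", "17:00:00")]

fmt = "%H:%M:%S"
pairs = [(datetime.strptime(tin, fmt), datetime.strptime(tout, fmt))
         for tin, tout in shifts]

# Total worked minutes: sum of (out - in) per shift.
worked_minutes = sum(int((tout - tin).total_seconds() // 60)
                     for tin, tout in pairs)
# Break minutes: gap between earliest time-out and latest time-in,
# matching DATEDIFF(mi, MIN(tout), MAX(tin)) in the answer.
break_minutes = int((max(tin for tin, _ in pairs)
                     - min(tout for _, tout in pairs)).total_seconds() // 60)
```

8:30-11:35 plus 13:00-17:00 gives 425 worked minutes (7 whole hours after the integer division by 60) and an 85-minute break.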
qid & accept id:
(20707736, 20708437)
query:
Access query to return several similar records when one is flagged
soup:
This query should give you a list of unique priStkCode values for which at least one row exists with False in priPriceConfirmed.
\nSELECT DISTINCT priStkCode\nFROM tblPriData\nWHERE priPriceConfirmed = False;\n
\nThen you can select the matching tblPriData rows with an INNER JOIN to that query.
\nSELECT pd.*\nFROM\n tblPriData AS pd\n INNER JOIN\n (\n SELECT DISTINCT priStkCode\n FROM tblPriData\n WHERE priPriceConfirmed = False\n ) AS sub\n ON pd.priStkCode = sub.priStkCode;\n
\n
soup wrap:
This query should give you a list of unique priStkCode values for which at least one row exists with False in priPriceConfirmed.
SELECT DISTINCT priStkCode
FROM tblPriData
WHERE priPriceConfirmed = False;
Then you can select the matching tblPriData rows with an INNER JOIN to that query.
SELECT pd.*
FROM
tblPriData AS pd
INNER JOIN
(
SELECT DISTINCT priStkCode
FROM tblPriData
WHERE priPriceConfirmed = False
) AS sub
ON pd.priStkCode = sub.priStkCode;
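A compact sqlite3 check of the subquery-join (0 stands in for Access's False; the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tblPriData (priStkCode TEXT, priPriceConfirmed INTEGER);
INSERT INTO tblPriData VALUES ('A', 1), ('A', 0), ('B', 1);
""")

# Pull every row for any stock code that has at least one
# unconfirmed price - including that code's confirmed rows.
rows = conn.execute("""
    SELECT pd.priStkCode, pd.priPriceConfirmed
    FROM tblPriData pd
    JOIN (SELECT DISTINCT priStkCode FROM tblPriData
          WHERE priPriceConfirmed = 0) sub
      ON pd.priStkCode = sub.priStkCode
    ORDER BY pd.priPriceConfirmed
""").fetchall()
```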
qid & accept id:
(20757884, 20758116)
query:
Calculate last days of months for given period in SQL Server
soup:
The easiest option is to have a calendar table, with a last day of the month flag, so your query would simply be:
\nSELECT *\nFROM dbo.Calendar\nWHERE Date >= @StartDate\nAND Date <= @EndDate\nAND EndOfMonth = 1;\n
\nAssuming of course that you don't have a calendar table, you can generate a list of dates on the fly:
\nDECLARE @s_date DATE = '20130101',\n @e_date DATE = '20130601';\n\nSELECT Date = DATEADD(DAY, ROW_NUMBER() OVER(ORDER BY Object_ID) - 1, @s_date)\nFROM sys.all_objects;\n
\nThen once you have your dates you can limit them to where the date is the last day of the month (where adding one day makes it the first of the month):
\nDECLARE @s_date DATE = '20130101',\n @e_date DATE = '20130601';\n\nWITH Dates AS\n( SELECT Date = DATEADD(DAY, ROW_NUMBER() OVER(ORDER BY Object_ID) - 1, @s_date)\n FROM sys.all_objects\n)\nSELECT *\nFROM Dates\nWHERE Date <= @e_Date\nAND DATEPART(DAY, DATEADD(DAY, 1, Date)) = 1;\n
\n\n
soup wrap:
The easiest option is to have a calendar table, with a last day of the month flag, so your query would simply be:
SELECT *
FROM dbo.Calendar
WHERE Date >= @StartDate
AND Date <= @EndDate
AND EndOfMonth = 1;
Assuming of course that you don't have a calendar table, you can generate a list of dates on the fly:
DECLARE @s_date DATE = '20130101',
@e_date DATE = '20130601';
SELECT Date = DATEADD(DAY, ROW_NUMBER() OVER(ORDER BY Object_ID) - 1, @s_date)
FROM sys.all_objects;
Then once you have your dates you can limit them to where the date is the last day of the month (where adding one day makes it the first of the month):
DECLARE @s_date DATE = '20130101',
@e_date DATE = '20130601';
WITH Dates AS
( SELECT Date = DATEADD(DAY, ROW_NUMBER() OVER(ORDER BY Object_ID) - 1, @s_date)
FROM sys.all_objects
)
SELECT *
FROM Dates
WHERE Date <= @e_Date
AND DATEPART(DAY, DATEADD(DAY, 1, Date)) = 1;
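Outside SQL, the same list of month-end dates can be produced with Python's calendar module (the helper name is my own):

```python
import calendar
from datetime import date

def month_ends(start, end):
    """Last day of each month falling between start and end (inclusive)."""
    ends = []
    y, m = start.year, start.month
    while (y, m) <= (end.year, end.month):
        # monthrange returns (first weekday, number of days in the month).
        last = date(y, m, calendar.monthrange(y, m)[1])
        if start <= last <= end:
            ends.append(last)
        m += 1
        if m == 13:
            y, m = y + 1, 1
    return ends

ends = month_ends(date(2013, 1, 1), date(2013, 6, 1))
```

For the sample range this yields five dates, 2013-01-31 through 2013-05-31; June's month end falls after the end date and is excluded.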
qid & accept id:
(20783264, 20783422)
query:
Making select and delete queries as single statement
soup:
Try this:
\nDELETE FROM posts \nWHERE id IN (SELECT id \n FROM (SELECT post_title, MAX(id) id \n FROM posts \n WHERE post_title IN ('abc', 'xyz') \n GROUP BY post_title \n ) A \n )\n
\nOR
\nDELETE FROM posts \nWHERE id IN (SELECT id \n FROM (SELECT post_title, id \n FROM posts \n WHERE post_title IN ('abc', 'xyz') \n ORDER BY post_title, id DESC\n ) A \n GROUP BY post_title)\n
\n
soup wrap:
Try this:
DELETE FROM posts
WHERE id IN (SELECT id
FROM (SELECT post_title, MAX(id) id
FROM posts
WHERE post_title IN ('abc', 'xyz')
GROUP BY post_title
) A
)
OR
DELETE FROM posts
WHERE id IN (SELECT id
FROM (SELECT post_title, id
FROM posts
WHERE post_title IN ('abc', 'xyz')
ORDER BY post_title, id DESC
) A
GROUP BY post_title)
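SQLite accepts the inner SELECT without the extra derived-table wrapper (MySQL needs the `(SELECT ...) A` wrapper because it refuses to read the DELETE target table in a subquery); a quick check with invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE posts (id INTEGER PRIMARY KEY, post_title TEXT);
INSERT INTO posts VALUES (1, 'abc'), (2, 'abc'), (3, 'xyz'), (4, 'xyz');
""")

# Delete the highest id per title, as the first statement above does.
conn.execute("""
    DELETE FROM posts WHERE id IN
        (SELECT MAX(id) FROM posts
         WHERE post_title IN ('abc', 'xyz') GROUP BY post_title)
""")
remaining = [r[0] for r in conn.execute("SELECT id FROM posts ORDER BY id")]
```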
qid & accept id:
(20792891, 20792959)
query:
Selecting Spicific data placed in the middle of the database table
soup:
I guess you can make use of the ROW_NUMBER function, something like this:
\n;WITH OrderedData\n AS\n (\n SELECT * , rn = ROW_NUMBER() OVER (ORDER BY SomeColumn)\n FROM Table_Name\n )\nSELECT * FROM OrderedData\nWHERE rn >= @LowerLimit AND rn <= @UpperLimit\n
\nYour Query
\nselect * from articles \nwhere articleid between @indexOfSelection AND @LimitOfselection\n
\nYou just need to add the keyword AND between your lower-limit variable and your upper-limit variable.
\nYour Stored Procedure
\nCREATE PROCEDURE ordered_articles \n@LowerBound int, \n@UpperBound int \nAS \nBEGIN\n SET NOCOUNT ON;\n select * from articles \n where articleid between @LowerBound and @UpperBound \nEND\n
\nTo Select A range Of Rows
\nCREATE PROCEDURE ordered_articles \n@LowerBound int, \n@UpperBound int \nAS \nBEGIN\n SET NOCOUNT ON;\nWITH OrderedData\nAS\n (\n SELECT * , rn = ROW_NUMBER() OVER (ORDER BY articleid)\n FROM articles\n )\nSELECT * FROM OrderedData\nWHERE rn >= @LowerBound AND rn <= @UpperBound\n\nEND\n\n EXECUTE ordered_articles 10, 15 --<-- this will return 10 to 15 number row ordered by ArticleID\n
\n
soup wrap:
I guess you can make use of the ROW_NUMBER function, something like this:
;WITH OrderedData
AS
(
SELECT * , rn = ROW_NUMBER() OVER (ORDER BY SomeColumn)
FROM Table_Name
)
SELECT * FROM OrderedData
WHERE rn >= @LowerLimit AND rn <= @UpperLimit
Your Query
select * from articles
where articleid between @indexOfSelection AND @LimitOfselection
You just need to add the keyword AND between your lower-limit variable and your upper-limit variable.
Your Stored Procedure
CREATE PROCEDURE ordered_articles
@LowerBound int,
@UpperBound int
AS
BEGIN
SET NOCOUNT ON;
select * from articles
where articleid between @LowerBound and @UpperBound
END
To Select A range Of Rows
CREATE PROCEDURE ordered_articles
@LowerBound int,
@UpperBound int
AS
BEGIN
SET NOCOUNT ON;
WITH OrderedData
AS
(
SELECT * , rn = ROW_NUMBER() OVER (ORDER BY articleid)
FROM articles
)
SELECT * FROM OrderedData
WHERE rn >= @LowerBound AND rn <= @UpperBound
END
EXECUTE ordered_articles 10, 15 --<-- this will return rows 10 to 15, ordered by articleid
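The ROW_NUMBER range trick can be tried outside SQL Server too; here is a small sketch against an in-memory SQLite database (assumes SQLite 3.25+ for window functions; the articles table follows the answer, the sample rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (articleid INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO articles (title) VALUES (?)",
                 [("article %d" % i,) for i in range(1, 21)])

lower, upper = 10, 15
rows = conn.execute("""
    WITH OrderedData AS (
        SELECT *, ROW_NUMBER() OVER (ORDER BY articleid) AS rn
        FROM articles
    )
    SELECT articleid, title FROM OrderedData WHERE rn BETWEEN ? AND ?
""", (lower, upper)).fetchall()
print(rows)  # six rows, articleid 10 through 15
```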
qid & accept id:
(20794860, 20795034)
query:
regex in SQL to detect one or more digits
soup:
Use REGEXP operator instead of LIKE operator
\nTry this:
\nSELECT '129387 store' REGEXP '^[0-9]* store$';\n\nSELECT * FROM shop WHERE `name` REGEXP '^[0-9]+ store$';\n
\nCheck the SQL FIDDLE DEMO
\nOUTPUT
\n| NAME |\n|--------------|\n| 129387 store |\n
\n
soup wrap:
Use the REGEXP operator instead of the LIKE operator
Try this:
SELECT '129387 store' REGEXP '^[0-9]* store$';
SELECT * FROM shop WHERE `name` REGEXP '^[0-9]+ store$';
Check the SQL FIDDLE DEMO
OUTPUT
| NAME |
|--------------|
| 129387 store |
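The same anchored pattern can be checked outside the database; a quick Python sketch with `re` (the sample names are invented):

```python
import re

# One or more digits, a space, then the literal word "store"
pattern = re.compile(r"^[0-9]+ store$")

names = ["129387 store", "store", "12a store", "42 store"]
matches = [n for n in names if pattern.match(n)]
print(matches)  # ['129387 store', '42 store']
```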
qid & accept id:
(20830315, 20831605)
query:
How to calculate running total (month to date) in SQL Server 2008
soup:
\nA running total is the summation of a sequence of numbers which is\n updated each time a new number is added to the sequence, simply by\n adding the value of the new number to the running total.
\n
\nI THINK He wants a running total for Month by each Representative_Id, so a simple group by week isn't enough. He probably wants his Month_To_Date_Activities_Count to be updated at the end of every week.
\nThis query gives a running total (month to end-of-week date) ordered by Representative_Id, Week
\nSELECT a.Representative_ID, l.month, l.Week, Count(*) AS Total_Week_Activity_Count\n ,(SELECT count(*)\n FROM ACTIVITIES_FACT a2\n INNER JOIN LU_TIME l2 ON a2.Date = l2.Date\n AND a.Representative_ID = a2.Representative_ID\n WHERE l2.week <= l.week\n AND l2.month = l.month) Month_To_Date_Activities_Count\nFROM ACTIVITIES_FACT a\nINNER JOIN LU_TIME l ON a.Date = l.Date\nGROUP BY a.Representative_ID, l.Week, l.month\nORDER BY a.Representative_ID, l.Week\n
\n
\n| REPRESENTATIVE_ID | MONTH | WEEK | TOTAL_WEEK_ACTIVITY_COUNT | MONTH_TO_DATE_ACTIVITIES_COUNT |\n|-------------------|-------|------|---------------------------|--------------------------------|\n| 40 | 7 | 7/08 | 1 | 1 |\n| 40 | 8 | 8/09 | 1 | 1 |\n| 40 | 8 | 8/10 | 1 | 2 |\n| 41 | 7 | 7/08 | 2 | 2 |\n| 41 | 8 | 8/08 | 4 | 4 |\n| 41 | 8 | 8/09 | 3 | 7 |\n| 41 | 8 | 8/10 | 1 | 8 |\n
\n\n
soup wrap:
A running total is the summation of a sequence of numbers which is
updated each time a new number is added to the sequence, simply by
adding the value of the new number to the running total.
I think he wants a running total per month for each Representative_Id, so a simple GROUP BY week isn't enough. He probably wants Month_To_Date_Activities_Count to be updated at the end of every week.
This query gives a running total (month to end-of-week date) ordered by Representative_Id, Week
SELECT a.Representative_ID, l.month, l.Week, Count(*) AS Total_Week_Activity_Count
,(SELECT count(*)
FROM ACTIVITIES_FACT a2
INNER JOIN LU_TIME l2 ON a2.Date = l2.Date
AND a.Representative_ID = a2.Representative_ID
WHERE l2.week <= l.week
AND l2.month = l.month) Month_To_Date_Activities_Count
FROM ACTIVITIES_FACT a
INNER JOIN LU_TIME l ON a.Date = l.Date
GROUP BY a.Representative_ID, l.Week, l.month
ORDER BY a.Representative_ID, l.Week
| REPRESENTATIVE_ID | MONTH | WEEK | TOTAL_WEEK_ACTIVITY_COUNT | MONTH_TO_DATE_ACTIVITIES_COUNT |
|-------------------|-------|------|---------------------------|--------------------------------|
| 40 | 7 | 7/08 | 1 | 1 |
| 40 | 8 | 8/09 | 1 | 1 |
| 40 | 8 | 8/10 | 1 | 2 |
| 41 | 7 | 7/08 | 2 | 2 |
| 41 | 8 | 8/08 | 4 | 4 |
| 41 | 8 | 8/09 | 3 | 7 |
| 41 | 8 | 8/10 | 1 | 8 |
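The month-to-date logic above can be sketched in Python over the weekly counts from the result set; the running dictionary plays the role of the correlated subquery (rows are copied from the table above):

```python
from collections import defaultdict

# (representative_id, month, week, weekly_count), already ordered by week
weekly = [
    (40, 7, "7/08", 1), (40, 8, "8/09", 1), (40, 8, "8/10", 1),
    (41, 7, "7/08", 2), (41, 8, "8/08", 4), (41, 8, "8/09", 3), (41, 8, "8/10", 1),
]

running = defaultdict(int)  # keyed by (rep, month), like the subquery's WHERE clause
result = []
for rep, month, week, count in weekly:
    running[(rep, month)] += count
    result.append((rep, month, week, count, running[(rep, month)]))

for row in result:
    print(row)
```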
qid & accept id:
(20838921, 20838992)
query:
Add values from the previous row of one column to another column in current row
soup:
You can use OUTER APPLY:
\nCREATE TABLE #T (Amount INT);\nINSERT #T (Amount) VALUES (1), (2), (3), (4), (5), (6), (7);\n\nSELECT T.Amount, T2.Amount\nFROM #T T\n OUTER APPLY\n ( SELECT Amount = SUM(Amount)\n FROM #T T2\n WHERE T2.Amount <= T.Amount\n ) T2;\n\nDROP TABLE #T;\n
\nOr a correlated subquery:
\nCREATE TABLE #T (Amount INT);\nINSERT #T (Amount) VALUES (1), (2), (3), (4), (5), (6), (7);\n\nSELECT T.Amount, \n ( SELECT Amount = SUM(Amount)\n FROM #T T2\n WHERE T2.Amount <= T.Amount\n ) \nFROM #T T\n\nDROP TABLE #T;\n
\nBoth should yield the same plan (In this case they are essentially the same and the IO is identical).
\n
\nRight, subtraction. Got there in the end, I will go through how I eventually got to the solution because it took me a while, it is not as straight forward as a cumulative sum..
\nFirst I just wrote out a query that was exactly what the logic was, essentially:
\nf(x) = x - f(x - 1);\n
\nSo by copy and pasting the formula from the previous line I got to:
\nSELECT [1] = 1,\n [2] = 2 - 1,\n [3] = 3 - (2 - 1),\n [4] = 4 - (3 - (2 - 1)),\n [5] = 5 - (4 - (3 - (2 - 1))),\n [6] = 6 - (5 - (4 - (3 - (2 - 1)))),\n [7] = 7 - (6 - (5 - (4 - (3 - (2 - 1)))));\n
\nI then expanded out all the parentheses to give:
\nSELECT [1] = 1,\n [2] = 2 - 1,\n [3] = 3 - 2 + 1,\n [4] = 4 - 3 + 2 - 1,\n [5] = 5 - 4 + 3 - 2 + 1,\n [6] = 6 - 5 + 4 - 3 + 2 - 1,\n [7] = 7 - 6 + 5 - 4 + 3 - 2 + 1;\n
\nAs you can see the operator alternates between + and - for each amount as you move down (i.e. for 5 you add the 3, for 6 you minus the 3, then for 7 you add it again).
\nThis means you need to find out the position of each value to work out whether or not to add or subtract it. So using this:
\nSELECT T.Amount, \n T2.RowNum,\n T2.Amount\nFROM #T T\n OUTER APPLY\n ( SELECT Amount, RowNum = ROW_NUMBER() OVER(ORDER BY Amount DESC)\n FROM #T T2\n WHERE T2.Amount < T.Amount\n ) T2\nWHERE T.Amount IN (4, 5)\n
\nYou end up with:
\nAmount RowNum Amount\n-------------------------\n4 1 3\n4 2 2\n4 3 1\n-------------------------\n5 1 4\n5 2 3\n5 3 2\n5 4 1\n
\nSo remembering the previous formala for these two:
\n[4] = 4 - 3 + 2 - 1,\n[5] = 5 - 4 + 3 - 2 + 1,\n
\nWe can see that where RowNum is odd we need to - the second amount, where it is even we need to add it. We can't use ROW_NUMBER() inside a SUM function, so we then need to perform a second aggregate, giving a final query of:
\nSELECT T.Amount, \n Subtraction = T.Amount - SUM(ISNULL(T2.Amount, 0))\nFROM #T T\n OUTER APPLY\n ( SELECT Amount = CASE WHEN ROW_NUMBER() OVER(ORDER BY Amount DESC) % 2 = 0 THEN -Amount ELSE Amount END\n FROM #T T2\n WHERE T2.Amount < T.Amount\n ) T2\nGROUP BY T.Amount;\n
\n\n
soup wrap:
You can use OUTER APPLY:
CREATE TABLE #T (Amount INT);
INSERT #T (Amount) VALUES (1), (2), (3), (4), (5), (6), (7);
SELECT T.Amount, T2.Amount
FROM #T T
OUTER APPLY
( SELECT Amount = SUM(Amount)
FROM #T T2
WHERE T2.Amount <= T.Amount
) T2;
DROP TABLE #T;
Or a correlated subquery:
CREATE TABLE #T (Amount INT);
INSERT #T (Amount) VALUES (1), (2), (3), (4), (5), (6), (7);
SELECT T.Amount,
( SELECT Amount = SUM(Amount)
FROM #T T2
WHERE T2.Amount <= T.Amount
)
FROM #T T
DROP TABLE #T;
Both should yield the same plan (In this case they are essentially the same and the IO is identical).
Right, subtraction. Got there in the end. I will go through how I eventually got to the solution because it took me a while; it is not as straightforward as a cumulative sum.
First I just wrote out a query that was exactly what the logic was, essentially:
f(x) = x - f(x - 1);
So by copy and pasting the formula from the previous line I got to:
SELECT [1] = 1,
[2] = 2 - 1,
[3] = 3 - (2 - 1),
[4] = 4 - (3 - (2 - 1)),
[5] = 5 - (4 - (3 - (2 - 1))),
[6] = 6 - (5 - (4 - (3 - (2 - 1)))),
[7] = 7 - (6 - (5 - (4 - (3 - (2 - 1)))));
I then expanded out all the parentheses to give:
SELECT [1] = 1,
[2] = 2 - 1,
[3] = 3 - 2 + 1,
[4] = 4 - 3 + 2 - 1,
[5] = 5 - 4 + 3 - 2 + 1,
[6] = 6 - 5 + 4 - 3 + 2 - 1,
[7] = 7 - 6 + 5 - 4 + 3 - 2 + 1;
As you can see, the operator alternates between + and - for each amount as you move down (i.e. for 5 you add the 3, for 6 you subtract the 3, then for 7 you add it again).
This means you need to find out the position of each value to work out whether or not to add or subtract it. So using this:
SELECT T.Amount,
T2.RowNum,
T2.Amount
FROM #T T
OUTER APPLY
( SELECT Amount, RowNum = ROW_NUMBER() OVER(ORDER BY Amount DESC)
FROM #T T2
WHERE T2.Amount < T.Amount
) T2
WHERE T.Amount IN (4, 5)
You end up with:
Amount RowNum Amount
-------------------------
4 1 3
4 2 2
4 3 1
-------------------------
5 1 4
5 2 3
5 3 2
5 4 1
So remembering the previous formula for these two:
[4] = 4 - 3 + 2 - 1,
[5] = 5 - 4 + 3 - 2 + 1,
We can see that where RowNum is odd we need to subtract the second amount, and where it is even we need to add it. We can't use ROW_NUMBER() inside a SUM function, so we then need to perform a second aggregate, giving a final query of:
SELECT T.Amount,
Subtraction = T.Amount - SUM(ISNULL(T2.Amount, 0))
FROM #T T
OUTER APPLY
( SELECT Amount = CASE WHEN ROW_NUMBER() OVER(ORDER BY Amount DESC) % 2 = 0 THEN -Amount ELSE Amount END
FROM #T T2
WHERE T2.Amount < T.Amount
) T2
GROUP BY T.Amount;
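A quick Python check (using the sample amounts 1..7 from the answer) that the alternating-sign form really matches the recurrence f(x) = x - f(previous):

```python
amounts = [1, 2, 3, 4, 5, 6, 7]

# Direct recurrence: f(x) = x - f(previous amount)
recursive = []
prev = 0
for a in amounts:
    prev = a - prev
    recursive.append(prev)

def alternating(x, values):
    """Alternating-sign form: subtract the 1st amount below x, add the 2nd, ..."""
    below = sorted((v for v in values if v < x), reverse=True)
    return x - sum(v if i % 2 == 0 else -v for i, v in enumerate(below))

assert recursive == [alternating(a, amounts) for a in amounts]
print(recursive)  # [1, 1, 2, 2, 3, 3, 4]
```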
qid & accept id:
(20922520, 20928271)
query:
Select data from rows into collection of oracle udt objects
soup:
\nOracle 11g R2 Schema Setup:
\nCREATE TABLE Test ( A, B, C, D, E ) AS\nSELECT LEVEL, LEVEL * 500, SQRT( LEVEL ), CHR( 64 + LEVEL ), RPAD( CHR( 64 + LEVEL ), 8, CHR( 64 + LEVEL ) )\nFROM DUAL\nCONNECT BY LEVEL <= 26\n/\n\nCREATE TYPE Test_Record AS OBJECT (\n A NUMBER,\n B NUMBER,\n C NUMBER,\n D CHAR(1),\n E CHAR(8)\n)\n/\n\nCREATE TYPE Test_Record_Table AS TABLE OF Test_Record\n/\n\nCREATE PROCEDURE get_Table_Of_Test_Records (\n p_records OUT Test_Record_Table\n)\nIS\nBEGIN\n SELECT Test_Record( A, B, C, D, E )\n BULK COLLECT INTO p_records\n FROM Test;\nEND get_Table_Of_Test_Records;\n/\n
\nQuery 1:
\nDECLARE\n trt Test_Record_Table;\nBEGIN\n get_Table_Of_Test_Records( trt );\n\n -- Do something with the collection.\nEND;\n
\n
soup wrap:
Oracle 11g R2 Schema Setup:
CREATE TABLE Test ( A, B, C, D, E ) AS
SELECT LEVEL, LEVEL * 500, SQRT( LEVEL ), CHR( 64 + LEVEL ), RPAD( CHR( 64 + LEVEL ), 8, CHR( 64 + LEVEL ) )
FROM DUAL
CONNECT BY LEVEL <= 26
/
CREATE TYPE Test_Record AS OBJECT (
A NUMBER,
B NUMBER,
C NUMBER,
D CHAR(1),
E CHAR(8)
)
/
CREATE TYPE Test_Record_Table AS TABLE OF Test_Record
/
CREATE PROCEDURE get_Table_Of_Test_Records (
p_records OUT Test_Record_Table
)
IS
BEGIN
SELECT Test_Record( A, B, C, D, E )
BULK COLLECT INTO p_records
FROM Test;
END get_Table_Of_Test_Records;
/
Query 1:
DECLARE
trt Test_Record_Table;
BEGIN
get_Table_Of_Test_Records( trt );
-- Do something with the collection.
END;
qid & accept id:
(20935221, 20935611)
query:
SQL - select a list of lists
soup:
How about
\nSELECT firstname, lastname, merge_id \nFROM table t\nORDER BY t.merge_id\n
\nThat would give you a record per person, and the merge_id will be ascending:
\n1 | Jane Doe \n1 | John Doe\n2 | max payne\n3 | sub zero\n
\nOtherwise, you can use GROUP_CONCAT:
\nSELECT merge_id , GROUP_CONCAT(CONCAT(firstname, ' ', lastname))\nFROM table t\nGROUP BY t.merge_id\nORDER BY t.merge_id\n
\nWhich will give one record per merge_id:
\n1 | Jane Doe, John Doe\n2 | max payne\n3 | sub zero\n
\n
soup wrap:
How about
SELECT firstname, lastname, merge_id
FROM table t
ORDER BY t.merge_id
That would give you a record per person, and the merge_id will be ascending:
1 | Jane Doe
1 | John Doe
2 | max payne
3 | sub zero
Otherwise, you can use GROUP_CONCAT:
SELECT merge_id , GROUP_CONCAT(CONCAT(firstname, ' ', lastname))
FROM table t
GROUP BY t.merge_id
ORDER BY t.merge_id
Which will give one record per merge_id:
1 | Jane Doe, John Doe
2 | max payne
3 | sub zero
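GROUP_CONCAT's grouping can be mimicked in Python with itertools.groupby, which needs the rows sorted by the grouping key first (the sample people mirror the answer's output):

```python
from itertools import groupby

# (firstname, lastname, merge_id) rows
people = [
    ("Jane", "Doe", 1), ("John", "Doe", 1),
    ("max", "payne", 2), ("sub", "zero", 3),
]

# groupby only groups adjacent rows, so sort by merge_id first
rows = sorted(people, key=lambda p: p[2])
grouped = {
    merge_id: ", ".join("%s %s" % (first, last) for first, last, _ in group)
    for merge_id, group in groupby(rows, key=lambda p: p[2])
}
print(grouped)  # {1: 'Jane Doe, John Doe', 2: 'max payne', 3: 'sub zero'}
```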
qid & accept id:
(20935240, 20935527)
query:
Point exist in circle
soup:
Test Data
\nDECLARE @t TABLE (x NUMERIC(10,2), y NUMERIC(10,2), radius NUMERIC(10,2))\nINSERT INTO @t\nVALUES (3.5,3.5, 5.5),(20.5,20.5, 10.5), (30.5,30.5, 20.5)\n
\nQuery
\nDECLARE @p1 NUMERIC(10,2) = 5.5 --<-- Point to check\nDECLARE @p2 NUMERIC(10,2) = 5.5\n\n\nSELECT *, CASE WHEN POWER( @p1 - x, 2) + POWER( @p2 - y, 2) <= POWER(radius, 2)\n THEN 'Inside The Circle'\n WHEN POWER( @p1 - x, 2) + POWER( @p2 - y, 2) > POWER(radius, 2)\n THEN 'Outside the Circle' END [Inside/Outside]\nFROM @t\n
\nResult Set
\n╔═══════╦═══════╦════════╦════════════════════╗\n║ x ║ y ║ radius ║ Inside/Outside ║\n╠═══════╬═══════╬════════╬════════════════════╣\n║ 3.50 ║ 3.50 ║ 5.50 ║ Inside The Circle ║\n║ 20.50 ║ 20.50 ║ 10.50 ║ Outside the Circle ║\n║ 30.50 ║ 30.50 ║ 20.50 ║ Outside the Circle ║\n╚═══════╩═══════╩════════╩════════════════════╝\n
\nAs question was closed, could not add another answer, so I edited this to include solution using Sql Server Geometry types... [Uses same data points as above, plus one to demo exactly on the circle]
\nDeclare @t TABLE \n (x NUMERIC(10,2), y NUMERIC(10,2), \n radius NUMERIC(10,2))\nInsert @t\nValues (3.5,3.5, 5.5),(20.5,20.5, 10.5), \n (30.5,30.5, 20.5), (-5.5, 5.5, 11.0)\n\n-- --------------------------\nDeclare @pX float = 5.5 \nDeclare @pY float = 5.5\nDeclare @c geometry;\nDeclare @p geometry;\nSelect x, y, radius, \n (geometry::Point(X, Y, 0)).STDistance(geometry::Point(@pX, @pY, 0))\nFrom @T\nWhere (geometry::Point(X, Y, 0)).STDistance(geometry::Point(@pX, @pY, 0)) > radius\n
\n
soup wrap:
Test Data
DECLARE @t TABLE (x NUMERIC(10,2), y NUMERIC(10,2), radius NUMERIC(10,2))
INSERT INTO @t
VALUES (3.5,3.5, 5.5),(20.5,20.5, 10.5), (30.5,30.5, 20.5)
Query
DECLARE @p1 NUMERIC(10,2) = 5.5 --<-- Point to check
DECLARE @p2 NUMERIC(10,2) = 5.5
SELECT *, CASE WHEN POWER( @p1 - x, 2) + POWER( @p2 - y, 2) <= POWER(radius, 2)
THEN 'Inside The Circle'
WHEN POWER( @p1 - x, 2) + POWER( @p2 - y, 2) > POWER(radius, 2)
THEN 'Outside the Circle' END [Inside/Outside]
FROM @t
Result Set
╔═══════╦═══════╦════════╦════════════════════╗
║ x ║ y ║ radius ║ Inside/Outside ║
╠═══════╬═══════╬════════╬════════════════════╣
║ 3.50 ║ 3.50 ║ 5.50 ║ Inside The Circle ║
║ 20.50 ║ 20.50 ║ 10.50 ║ Outside the Circle ║
║ 30.50 ║ 30.50 ║ 20.50 ║ Outside the Circle ║
╚═══════╩═══════╩════════╩════════════════════╝
As the question was closed, I could not add another answer, so I edited this to include a solution using SQL Server geometry types. [Uses the same data points as above, plus one to demo a point exactly on the circle.]
Declare @t TABLE
(x NUMERIC(10,2), y NUMERIC(10,2),
radius NUMERIC(10,2))
Insert @t
Values (3.5,3.5, 5.5),(20.5,20.5, 10.5),
(30.5,30.5, 20.5), (-5.5, 5.5, 11.0)
-- --------------------------
Declare @pX float = 5.5
Declare @pY float = 5.5
Declare @c geometry;
Declare @p geometry;
Select x, y, radius,
(geometry::Point(X, Y, 0)).STDistance(geometry::Point(@pX, @pY, 0))
From @T
Where (geometry::Point(X, Y, 0)).STDistance(geometry::Point(@pX, @pY, 0)) > radius
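The CASE expression boils down to comparing squared distance against squared radius; a small Python sketch of the same test, using the circles from the test data:

```python
def classify(px, py, cx, cy, radius):
    """Compare squared distance to squared radius, avoiding a sqrt (as the SQL does)."""
    d2 = (px - cx) ** 2 + (py - cy) ** 2
    return "Inside The Circle" if d2 <= radius ** 2 else "Outside the Circle"

circles = [(3.5, 3.5, 5.5), (20.5, 20.5, 10.5), (30.5, 30.5, 20.5)]
for cx, cy, r in circles:
    print(cx, cy, r, classify(5.5, 5.5, cx, cy, r))
```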
qid & accept id:
(20954662, 20954835)
query:
Merge queries into 1 for sorting
soup:
You can do this with nested subqueries:
\nselect u.user_id, count(*) as numusers,\n    (SELECT COUNT(user_id) FROM visitors v WHERE v.user_id = u.user_id) as NumVisitors,\n    (SELECT SUM(amount) FROM visitors v WHERE v.user_id = u.user_id) as VisitorAmount,\n    (SELECT COUNT(user_id) FROM sales s WHERE s.user_id = u.user_id) as NumSales\nfrom users u\ngroup by u.user_id;\n
\nYou can also do this by joining pre-aggregated queries:
\nselect u.user_id, v.NumVisitors, v.VisitorAmount, s.NumSales\nfrom (select u.user_id, count(*) as NumUsers\n      from users u\n      group by u.user_id\n     ) u left outer join\n     (select v.user_id, count(user_id) as NumVisitors, sum(amount) as VisitorAmount\n      from visitors v\n      group by v.user_id\n     ) v\n     on u.user_id = v.user_id left outer join\n     (select s.user_id, count(user_id) as NumSales\n      from sales s\n      group by s.user_id\n     ) s\n     on s.user_id = u.user_id;\n
\n
soup wrap:
You can do this with nested subqueries:
select u.user_id, count(*) as numusers,
(SELECT COUNT(user_id) FROM visitors v WHERE v.user_id = u.user_id) as NumVisitors,
(SELECT SUM(amount) FROM visitors v WHERE v.user_id = u.user_id) as VisitorAmount,
(SELECT COUNT(user_id) FROM sales s WHERE s.user_id = u.user_id) as NumSales
from users u
group by u.user_id;
You can also do this by joining pre-aggregated queries:
select u.user_id, v.NumVisitors, v.VisitorAmount, s.NumSales
from (select u.user_id, count(*) as NumUsers
from users u
group by u.user_id
) u left outer join
(select v.user_id, count(user_id) as NumVisitors, sum(amount) as VisitorAmount
from visitors v
group by v.user_id
) v
on u.user_id = v.user_id left outer join
(select s.user_id, count(user_id) as NumSales
from sales s
group by s.user_id
) s
on s.user_id = u.user_id;
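The pre-aggregated approach can be mimicked in plain Python to see the shape of the result; the aggregates per table are built first, then combined per user (the sample user ids and amounts are invented):

```python
from collections import Counter, defaultdict

users = [1, 2, 3]
visitors = [(1, 10.0), (1, 5.0), (2, 7.5)]   # (user_id, amount)
sales = [1, 1, 3]                            # user_id per sale

# Aggregate each "table" once, like the derived tables in the join version
num_visits = Counter(u for u, _ in visitors)
amounts = defaultdict(float)
for u, amt in visitors:
    amounts[u] += amt
num_sales = Counter(sales)

# "Left join" back to users: missing keys default to 0, like the outer join's NULLs
report = [(u, num_visits[u], amounts[u], num_sales[u]) for u in users]
print(report)  # [(1, 2, 15.0, 2), (2, 1, 7.5, 0), (3, 0, 0.0, 1)]
```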
qid & accept id:
(21075815, 21076138)
query:
Interactive Query
soup:
Assuming you are coding an app wherein the user supplies the inputs, there are multiple ways to create a query that uses those values as variables - one way is as follows:
\nSET @t1=1, @t2=2, @t3:=4;\nSELECT @t1, @t2;\n
\nSource: http://dev.mysql.com/doc/refman/5.5/en/user-variables.html
\nSo for your particular case, replacing all the instances of X with the MySQL syntax for a user-defined variable @X, it would look something like this:
\nSET @X = user_input;\nSELECT @X AS DISTANCE,\nSUM(ABS(LOCX) <= @X AND ABS(LOCY) <= @X) AS QUANTITY,\nCOUNT(*) AS TOTAL,\nCONCAT(AVG(ABS(LOCX) <= @X AND ABS(LOCY) <= @X)*100, '%') AS PERCENTAGE\nFROM CUSTOMER;\n
\n
soup wrap:
Assuming you are coding an app wherein the user supplies the inputs, there are multiple ways to create a query that uses those values as variables - one way is as follows:
SET @t1=1, @t2=2, @t3:=4;
SELECT @t1, @t2;
Source: http://dev.mysql.com/doc/refman/5.5/en/user-variables.html
So for your particular case, replacing all the instances of X with the MySQL syntax for a user-defined variable @X, it would look something like this:
SET @X = user_input;
SELECT @X AS DISTANCE,
SUM(ABS(LOCX) <= @X AND ABS(LOCY) <= @X) AS QUANTITY,
COUNT(*) AS TOTAL,
CONCAT(AVG(ABS(LOCX) <= @X AND ABS(LOCY) <= @X)*100, '%') AS PERCENTAGE
FROM CUSTOMER;
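What the query computes can be sketched in Python: count the customers whose location falls inside the square of half-side X, as `ABS(LOCX) <= @X AND ABS(LOCY) <= @X` does (the customer coordinates below are invented):

```python
# (LOCX, LOCY) per customer, hypothetical sample data
customers = [(1, 2), (-3, 4), (10, 0), (2, -2), (0, 7)]
X = 5  # the user-supplied distance

quantity = sum(1 for x, y in customers if abs(x) <= X and abs(y) <= X)
total = len(customers)
percentage = "%s%%" % (quantity / total * 100)
print(X, quantity, total, percentage)
```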
qid & accept id:
(21136618, 21136726)
query:
SQLite create table from table
soup:
with
\nSELECT sql FROM sqlite_master WHERE type='table' AND name='mytable' \n
\nyou can get the structure. You can modify this and create your new table. And finally you can
\nINSERT INTO MyTableCopy SELECT * FROM mytable;\n
\n
soup wrap:
with
SELECT sql FROM sqlite_master WHERE type='table' AND name='mytable'
you can get the structure. You can modify this and create your new table. And finally you can
INSERT INTO MyTableCopy SELECT * FROM mytable;
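The whole sequence fits naturally in Python's sqlite3 module: read the CREATE statement from sqlite_master, rename it, then bulk-insert (the table mytable and its columns are made up for the sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO mytable (name) VALUES (?)", [("a",), ("b",)])

# Step 1: read the original CREATE statement from sqlite_master
(create_sql,) = conn.execute(
    "SELECT sql FROM sqlite_master WHERE type='table' AND name='mytable'"
).fetchone()

# Step 2: rename it to create the copy, then copy the rows over
conn.execute(create_sql.replace("mytable", "MyTableCopy", 1))
conn.execute("INSERT INTO MyTableCopy SELECT * FROM mytable")

print(conn.execute("SELECT COUNT(*) FROM MyTableCopy").fetchone()[0])  # 2
```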
qid & accept id:
(21167225, 21167787)
query:
Select from table during update
soup:
This is one of the times you need to denormalise. Create a table
\ncreate table PreProcessedTotal (\n JaccardTotal decimal(18, 4) not null\n)\n
\n(substitute the appropriate data type). You need to add three triggers to table PreProcessed:
\n\n- An Insert trigger to add the value of Jaccard in the new row
\n- An Update, to add the Inserted value and substract the DELETED
\n- A Delete trigger to subtract the deleted value
\n
\nYou can then use:
\nselect Jaccard / JaccardTotal\nfrom Preprocessed with (nolock)\ncross join PreProcessedTotal with (nolock)\n
\nThe with (nolock) may not be needed. You'll also need to populate the PreProcessedTotal table with the current total when you put it live.
\n
soup wrap:
This is one of the times you need to denormalise. Create a table
create table PreProcessedTotal (
JaccardTotal decimal(18, 4) not null
)
(substitute the appropriate data type). You need to add three triggers to table PreProcessed:
- An Insert trigger to add the value of Jaccard in the new row
- An Update trigger, to add the INSERTED value and subtract the DELETED value
- A Delete trigger to subtract the deleted value
You can then use:
select Jaccard / JaccardTotal
from Preprocessed with (nolock)
cross join PreProcessedTotal with (nolock)
The with (nolock) may not be needed. You'll also need to populate the PreProcessedTotal table with the current total when you put it live.
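The three triggers can be demonstrated end-to-end in SQLite (the with (nolock) hint is SQL Server-specific and omitted; the Jaccard column type is assumed to be a plain REAL for the sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE PreProcessed (Jaccard REAL NOT NULL);
CREATE TABLE PreProcessedTotal (JaccardTotal REAL NOT NULL);
INSERT INTO PreProcessedTotal VALUES (0);

-- Insert trigger: add the new row's value
CREATE TRIGGER t_ins AFTER INSERT ON PreProcessed BEGIN
    UPDATE PreProcessedTotal SET JaccardTotal = JaccardTotal + NEW.Jaccard;
END;
-- Update trigger: add the inserted value and subtract the deleted one
CREATE TRIGGER t_upd AFTER UPDATE ON PreProcessed BEGIN
    UPDATE PreProcessedTotal SET JaccardTotal = JaccardTotal + NEW.Jaccard - OLD.Jaccard;
END;
-- Delete trigger: subtract the deleted value
CREATE TRIGGER t_del AFTER DELETE ON PreProcessed BEGIN
    UPDATE PreProcessedTotal SET JaccardTotal = JaccardTotal - OLD.Jaccard;
END;

INSERT INTO PreProcessed VALUES (1.5), (2.5), (4.0);
UPDATE PreProcessed SET Jaccard = 3.0 WHERE Jaccard = 2.5;
DELETE FROM PreProcessed WHERE Jaccard = 1.5;
""")
# 1.5 + 2.5 + 4.0, then +0.5 from the update, then -1.5 from the delete
print(conn.execute("SELECT JaccardTotal FROM PreProcessedTotal").fetchone()[0])  # 7.0
```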
qid & accept id:
(21234177, 21234992)
query:
Find first rows of change in historical table
soup:
CREATE TABLE T1 (A decimal(8,0), B int, C decimal(8,0))\nINSERT INTO T1 (A, B, C) VALUES (123, 0, 20130101),\n(123, 0, 20130102),(123, 1, 20130103),\n(123, 1, 20130104),(123, 0, 20130105),\n(123, 2, 20130106),(123, 2, 20130107),\n(123, 2, 20130108),(123, 0, 20130109),\n(123, 3, 20130110),(123, 3, 20130111),\n(123, 3, 20130112),(123, 3, 20130113)\n\n\n;with x as\n(\n select t1.A, t1.B, t1.C, \n row_number() over (partition by a order by c) rn \n from T1\n)\nselect x1.A, x1.B, x1.C \nfrom x x1\nleft join x x2\non x1.rn = x2.rn +1 and x1.A = x2.A\nwhere x2.A is null\nor x1.B <> x2.B\n
\nResult:
\nA B C\n123 0 20130101\n123 1 20130103\n123 0 20130105\n123 2 20130106\n123 0 20130109\n123 3 20130110\n
\n
soup wrap:
CREATE TABLE T1 (A decimal(8,0), B int, C decimal(8,0))
INSERT INTO T1 (A, B, C) VALUES (123, 0, 20130101),
(123, 0, 20130102),(123, 1, 20130103),
(123, 1, 20130104),(123, 0, 20130105),
(123, 2, 20130106),(123, 2, 20130107),
(123, 2, 20130108),(123, 0, 20130109),
(123, 3, 20130110),(123, 3, 20130111),
(123, 3, 20130112),(123, 3, 20130113)
;with x as
(
select t1.A, t1.B, t1.C,
row_number() over (partition by a order by c) rn
from T1
)
select x1.A, x1.B, x1.C
from x x1
left join x x2
on x1.rn = x2.rn +1 and x1.A = x2.A
where x2.A is null
or x1.B <> x2.B
Result:
A B C
123 0 20130101
123 1 20130103
123 0 20130105
123 2 20130106
123 0 20130109
123 3 20130110
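The self-join on rn = rn + 1 is just "compare each row with the previous row for the same A"; a Python sketch over the same sample rows makes that explicit:

```python
# (A, B, C) rows from the sample data, already ordered by C within each A
rows = [
    (123, 0, 20130101), (123, 0, 20130102), (123, 1, 20130103),
    (123, 1, 20130104), (123, 0, 20130105), (123, 2, 20130106),
    (123, 2, 20130107), (123, 2, 20130108), (123, 0, 20130109),
    (123, 3, 20130110), (123, 3, 20130111), (123, 3, 20130112),
    (123, 3, 20130113),
]

changes = []
prev = {}  # last seen B per A (the "x2.A is null" case is a missing key)
for a, b, c in rows:
    if a not in prev or prev[a] != b:
        changes.append((a, b, c))
    prev[a] = b

for row in changes:
    print(row)
```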
qid & accept id:
(21250631, 21257149)
query:
SQL Server - PIVOT - two columns into rows
soup:
There are a few different ways that you can get the result that you want. Similar to @Sheela K R's answer you can use an aggregate function with a CASE expression but it can be written in a more concise way:
\nselect \n max(case when rowid = 1 then first end) First1,\n max(case when rowid = 1 then last end) Last1,\n max(case when rowid = 2 then first end) First2,\n max(case when rowid = 2 then last end) Last2,\n max(case when rowid = 3 then first end) First3,\n max(case when rowid = 3 then last end) Last3,\n max(case when rowid = 4 then first end) First4,\n max(case when rowid = 4 then last end) Last4,\n max(case when rowid = 5 then first end) First5,\n max(case when rowid = 5 then last end) Last5\nfrom yourtable;\n
\nSee SQL Fiddle with Demo.
\nThis could also be written using the PIVOT function, however since you want to pivot multiple columns then you would first want to look at unpivoting your First and Last columns.
\nThe unpivot process will convert your multiple columns into multiple rows of data. You did not specify what version of SQL Server you are using but you can use a SELECT with UNION ALL with CROSS APPLY or even the UNPIVOT function to perform the first conversion:
\nselect col = col + cast(rowid as varchar(10)), value\nfrom yourtable\ncross apply \n(\n select 'First', First union all\n select 'Last', Last\n) c (col, value)\n
\nSee SQL Fiddle with Demo. This converts your data into the format:
\n| COL | VALUE |\n|--------|-------------|\n| First1 | RandomName1 |\n| Last1 | RandomLast1 |\n| First2 | RandomName2 |\n| Last2 | RandomLast2 |\n
\nOnce the data is in multiple rows, then you can easily apply the PIVOT function:
\nselect First1, Last1, \n First2, Last2,\n First3, Last3, \n First4, Last4, \n First5, Last5\nfrom\n(\n select col = col + cast(rowid as varchar(10)), value\n from yourtable\n cross apply \n (\n select 'First', First union all\n select 'Last', Last\n ) c (col, value)\n) d\npivot\n(\n max(value)\n for col in (First1, Last1, First2, Last2,\n First3, Last3, First4, Last4, First5, Last5)\n) piv;\n
\n\nBoth give a result of:
\n| FIRST1 | LAST1 | FIRST2 | LAST2 | FIRST3 | LAST3 | FIRST4 | LAST4 | FIRST5 | LAST5 |\n|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|\n| RandomName1 | RandomLast1 | RandomName2 | RandomLast2 | RandomName3 | RandomLast3 | RandomName4 | RandomLast4 | RandomName5 | RandomLast5 |\n
\n
soup wrap:
There are a few different ways that you can get the result that you want. Similar to @Sheela K R's answer you can use an aggregate function with a CASE expression but it can be written in a more concise way:
select
max(case when rowid = 1 then first end) First1,
max(case when rowid = 1 then last end) Last1,
max(case when rowid = 2 then first end) First2,
max(case when rowid = 2 then last end) Last2,
max(case when rowid = 3 then first end) First3,
max(case when rowid = 3 then last end) Last3,
max(case when rowid = 4 then first end) First4,
max(case when rowid = 4 then last end) Last4,
max(case when rowid = 5 then first end) First5,
max(case when rowid = 5 then last end) Last5
from yourtable;
See SQL Fiddle with Demo.
This could also be written using the PIVOT function, however since you want to pivot multiple columns then you would first want to look at unpivoting your First and Last columns.
The unpivot process will convert your multiple columns into multiple rows of data. You did not specify what version of SQL Server you are using but you can use a SELECT with UNION ALL with CROSS APPLY or even the UNPIVOT function to perform the first conversion:
select col = col + cast(rowid as varchar(10)), value
from yourtable
cross apply
(
select 'First', First union all
select 'Last', Last
) c (col, value)
See SQL Fiddle with Demo. This converts your data into the format:
| COL | VALUE |
|--------|-------------|
| First1 | RandomName1 |
| Last1 | RandomLast1 |
| First2 | RandomName2 |
| Last2 | RandomLast2 |
Once the data is in multiple rows, then you can easily apply the PIVOT function:
select First1, Last1,
First2, Last2,
First3, Last3,
First4, Last4,
First5, Last5
from
(
select col = col + cast(rowid as varchar(10)), value
from yourtable
cross apply
(
select 'First', First union all
select 'Last', Last
) c (col, value)
) d
pivot
(
max(value)
for col in (First1, Last1, First2, Last2,
First3, Last3, First4, Last4, First5, Last5)
) piv;
Both give a result of:
| FIRST1 | LAST1 | FIRST2 | LAST2 | FIRST3 | LAST3 | FIRST4 | LAST4 | FIRST5 | LAST5 |
|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|
| RandomName1 | RandomLast1 | RandomName2 | RandomLast2 | RandomName3 | RandomLast3 | RandomName4 | RandomLast4 | RandomName5 | RandomLast5 |
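The unpivot-then-pivot shape is easy to see in Python: each (First, Last) pair fans out into labeled rows, and the pivot collapses them back into one wide "row" keyed by column name (only two sample rowids here, to stay short):

```python
# (rowid, First, Last) rows, as in the sample data
people = [(1, "RandomName1", "RandomLast1"), (2, "RandomName2", "RandomLast2")]

# Unpivot: two (label, value) rows per source row, like the CROSS APPLY
unpivoted = []
for rowid, first, last in people:
    unpivoted.append(("First%d" % rowid, first))
    unpivoted.append(("Last%d" % rowid, last))

# Pivot: one wide row keyed by the generated column names
wide = dict(unpivoted)
print(wide["First1"], wide["Last2"])  # RandomName1 RandomLast2
```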
qid & accept id:
(21251963, 21252025)
query:
Select Single and Duplicate Row and Return Multiple Columns
soup:
Could it be as simple as:
\nSELECT DISTINCT Code, Stuff FROM MyTable\n
\nOr, just add stuff to the partition by clause:
\nPARTITION BY Code,Stuff ORDER BY Code\n
\n
soup wrap:
Could it be as simple as:
SELECT DISTINCT Code, Stuff FROM MyTable
Or, just add stuff to the partition by clause:
PARTITION BY Code,Stuff ORDER BY Code
qid & accept id:
(21259677, 25725716)
query:
How to store Word documents in SQL Server 2008?
soup:
finaly i got answer any file you can store into db:-
\n\nStep 1: Get a document informations as a binary (Convert all text into ascii binary format becouse if you have any functional operator it will broke your INSERT QUERY).
\nStep 2: Get Docment extensions for example (.docx, .pdf, .ppt) and include with your INSERT QUERY.
\n
\nif (file != null && file.ContentLength > 0)\n {\n string contentType = file.ContentType;\n\n byte[] fileData = new byte[file.InputStream.Length];\n file.InputStream.Read(fileData, 0, fileData.Length);\n\n string OriginalName = Path.GetFileName(file.FileName);\n string Username = User.Identity.Name;\n\n Models.File myFile = new Models.File(contentType, OriginalName, fileData, Username);\n myFile.Save();\n }\n
\n\nstep 3: On Retriving your documents you can use like this
\n
\n public ActionResult Download()\n {\n string Originalname = string.Empty;\n byte[] FileData = null;\n var requestedID = RouteData.Values["id"];\n if (requestedID.ToString() != null)\n {\n Guid id = new Guid(requestedID.ToString());\n DataSet ds = new DataSet();\n Models.UsersGroups dt = new Models.UsersGroups();\n ds = dt.GetItem(id);\n foreach (DataRow item in ds.Tables[0].Rows)\n {\n Originalname = item["OriginalName"].ToString();\n FileData = (byte[])item["FileData"];\n }\n Response.AppendHeader("Content-Disposition", "attachment;filename=\"" + Originalname + "\"");\n Response.BinaryWrite(FileData);\n }\n return File(FileData, "application/x-unknown");\n }\n
\n
soup wrap:
Finally I got the answer; you can store any file into the db:
Step 1: Get the document's information as binary (convert all text into ASCII binary format, because if it contains any functional operator it will break your INSERT query).
Step 2: Get the document extension, for example (.docx, .pdf, .ppt), and include it with your INSERT query.
if (file != null && file.ContentLength > 0)
{
string contentType = file.ContentType;
byte[] fileData = new byte[file.InputStream.Length];
file.InputStream.Read(fileData, 0, fileData.Length);
string OriginalName = Path.GetFileName(file.FileName);
string Username = User.Identity.Name;
Models.File myFile = new Models.File(contentType, OriginalName, fileData, Username);
myFile.Save();
}
Step 3: To retrieve your documents you can use something like this
public ActionResult Download()
{
string Originalname = string.Empty;
byte[] FileData = null;
var requestedID = RouteData.Values["id"];
if (requestedID != null)
{
Guid id = new Guid(requestedID.ToString());
DataSet ds = new DataSet();
Models.UsersGroups dt = new Models.UsersGroups();
ds = dt.GetItem(id);
foreach (DataRow item in ds.Tables[0].Rows)
{
Originalname = item["OriginalName"].ToString();
FileData = (byte[])item["FileData"];
}
Response.AppendHeader("Content-Disposition", "attachment;filename=\"" + Originalname + "\"");
Response.BinaryWrite(FileData);
}
return File(FileData, "application/x-unknown");
}
qid & accept id:
(21270528, 21270597)
query:
How to add more than one foreign key?
soup:
\nHow can I connect Member Name to other table
\n
\nDon't - leave Member Name in the member table. There should not be any reason to have a Member Name field in the Member_Fees_Record table if you can join it back to Member through the ID:
\nMember (Member ID, Member_Name, Age, Address)\n\nMember_Fees_Record (Member ID, Fee)\n
\nExample query:
\nSELECT m.MemberId, f.Fee, m.Member_Name, m.Address, m.Age\nFROM Member m\nINNER JOIN Member_Fees_Record f ON m.MemberID = f.MemberID\n
\n
soup wrap:
How can I connect Member Name to other table
Don't - leave Member Name in the member table. There should not be any reason to have a Member Name field in the Member_Fees_Record table if you can join it back to Member through the ID:
Member (Member ID, Member_Name, Age, Address)
Member_Fees_Record (Member ID, Fee)
Example query:
SELECT m.MemberId, f.Fee, m.Member_Name, m.Address, m.Age
FROM Member m
INNER JOIN Member_Fees_Record f ON m.MemberID = f.MemberID
qid & accept id:
(21280605, 21280968)
query:
Update Multiple SQL Server Columns from Access 2010 Form
soup:
You can enumerate selected items in each ListBox and build the SQL. Something like this
\nsql = "UPDATE tableName SET ColumnToUpdate = '" & txtZ & "' "\nsql = sql & "WHERE Column1 IN (" & GetValuesFromList(listBoxX) & ") "\nsql = sql & "AND Column2 IN (" & GetValuesFromList(listBoxy) & ")"\n
\nAnd the function GetValuesFromList:
\nPrivate Function GetValuesFromList(lst As ListBox) As String\nDim Items As String\nDim Item As Variant\n\n    Items = ""\n    For Each Item In lst.ItemsSelected\n        Items = Items & lst.ItemData(Item) & ","\n    Next\n    GetValuesFromList = Left(Items, Len(Items) - 1)\nEnd Function\n
\nIf the selected values in the list boxes are string values, you should modify the function to concatenate the quotes.
\n
soup wrap:
You can enumerate selected items in each ListBox and build the SQL. Something like this
sql = "UPDATE tableName SET ColumnToUpdate = '" & txtZ & "' "
sql = sql & "WHERE Column1 IN (" & GetValuesFromList(listBoxX) & ") "
sql = sql & "AND Column2 IN (" & GetValuesFromList(listBoxy) & ")"
And the function GetValuesFromList:
Private Function GetValuesFromList(lst As ListBox) As String
Dim Items As String
Dim Item As Variant
Items = ""
For Each Item In lst.ItemsSelected
Items = Items & lst.ItemData(Item) & ","
Next
GetValuesFromList = Left(Items, Len(Items) - 1)
End Function
If the selected values in the list boxes are string values, you should modify the function to wrap each value in quotes.
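A hypothetical Python mirror of GetValuesFromList, showing the quoting the last sentence asks for (the helper name and behavior are assumed from the VBA above; for user-entered values, a parameterized query is safer than string concatenation):

```python
def in_list(values):
    """Join values into a SQL IN-list, quoting string values."""
    parts = []
    for v in values:
        if isinstance(v, str):
            # Double any embedded single quotes, then wrap in quotes
            parts.append("'" + v.replace("'", "''") + "'")
        else:
            parts.append(str(v))
    return ", ".join(parts)

numeric = in_list([1, 2, 3])
textual = in_list(["NY", "O'Hare"])
```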
qid & accept id:
(21281481, 21281590)
query:
making a new column with last 10 digits of an other colulmn
soup:
You can use Right function
\nMySQL RIGHT() extracts a specified number of characters from the right side of a string.
\nUPDATE user SET phone_last_ten = RIGHT(phone, 10) \n
\nOr
\nUPDATE user SET phone_last_ten = RIGHT(CONVERT(Phone, CHAR(50)), 10) \n
\nDEMO
\n
soup wrap:
You can use the RIGHT() function.
MySQL RIGHT() extracts a specified number of characters from the right side of a string.
UPDATE user SET phone_last_ten = RIGHT(phone, 10)
Or
UPDATE user SET phone_last_ten = RIGHT(CONVERT(Phone, CHAR(50)), 10)
DEMO
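The same update can be sketched with Python's bundled sqlite3 (invented sample rows). SQLite has no RIGHT(), so SUBSTR with a negative start index is swapped in as the equivalent:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE user (phone TEXT, phone_last_ten TEXT);
INSERT INTO user (phone) VALUES ('+1-555-0100-12345'), ('5551234567');
""")

# SUBSTR(phone, -10) takes the last 10 characters, like MySQL's RIGHT(phone, 10)
conn.execute("UPDATE user SET phone_last_ten = SUBSTR(phone, -10)")
last = [r[0] for r in conn.execute("SELECT phone_last_ten FROM user ORDER BY rowid")]
```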
qid & accept id:
(21286642, 21286868)
query:
Return latest row ordered by ID while using group by
soup:
You can use the substring_index()/group_concat() trick:
\nselect a.title,\n substring_index(group_concat(status order by id desc), ',', 1) as laststatus\nfrom b join\n a\n on a.id - b.a_id\ngroup by a.title;\n
\nEDIT:
\nIf you just want the last record from b, you can do:
\nselect a.title, b.status\nfrom b join\n a\n on a.id - b.a_id\norder by b.id desc\nlimit 1;\n
\n
soup wrap:
You can use the substring_index()/group_concat() trick:
select a.title,
substring_index(group_concat(status order by id desc), ',', 1) as laststatus
from b join
a
     on a.id = b.a_id
group by a.title;
EDIT:
If you just want the last record from b, you can do:
select a.title, b.status
from b join
a
     on a.id = b.a_id
order by b.id desc
limit 1;
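SQLite has no SUBSTRING_INDEX(), so this sketch swaps in a correlated subquery to get the same latest-row-per-group result that the MySQL trick computes (schema follows the answer; data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE b (id INTEGER PRIMARY KEY, a_id INT, status TEXT);
INSERT INTO a VALUES (1, 'first'), (2, 'second');
INSERT INTO b VALUES (10, 1, 'open'), (11, 1, 'closed'), (12, 2, 'open');
""")

# Per title, pick the status of the b row with the highest id --
# the same value group_concat(... order by id desc) puts first in MySQL.
rows = conn.execute("""
    SELECT a.title,
           (SELECT status FROM b WHERE b.a_id = a.id ORDER BY b.id DESC LIMIT 1)
    FROM a ORDER BY a.id
""").fetchall()
```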
qid & accept id:
(21286804, 21287383)
query:
How to select only numbers from a text field
soup:
It is possible that this or a variation may suit:
\n SELECT t.Field1, Mid([Field1],InStr([field1],"(")+1,4) AS Stripped\n FROM TheTable As t\n
\nFor example:
\n UPDATE TheTable AS t SET [field2] = Mid([Field1],InStr([field1],"(")+1,4);\n
\nEDIT re comment
\nIf the field ends u), that is, alpha bracket, you can say:
\n UPDATE TheTable AS t SET [field2] =\n Mid([Field1],InStr([field1],"(")+1,Len(Mid([Field1],InStr([field1],"(")))-3)\n
\n
soup wrap:
It is possible that this or a variation may suit:
SELECT t.Field1, Mid([Field1],InStr([field1],"(")+1,4) AS Stripped
FROM TheTable As t
For example:
UPDATE TheTable AS t SET [field2] = Mid([Field1],InStr([field1],"(")+1,4);
EDIT re comment
If the field ends with a letter followed by a closing bracket, for example u), you can say:
UPDATE TheTable AS t SET [field2] =
Mid([Field1],InStr([field1],"(")+1,Len(Mid([Field1],InStr([field1],"(")))-3)
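A plain-Python mirror of the Access expression Mid(s, InStr(s, "(")+1, 4): take up to four characters after the first "(". The sample values are invented:

```python
def after_paren(s, n=4):
    """Return up to n characters after the first '(' in s, or '' if none."""
    i = s.find("(")  # VBA's InStr returns 0 when absent; str.find returns -1
    return s[i + 1 : i + 1 + n] if i >= 0 else ""

v1 = after_paren("Widget (1234) blue")
v2 = after_paren("no brackets here")
```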
qid & accept id:
(21302307, 21842979)
query:
How to migrate from CodeIgniter database to Laravel database
soup:
Databases are pretty much the same in Laravel or Codeigniter, if your tables are good the way they are for you and they have a primary key named id (this also is not mandatory) you can just connect with Laravel in your database and it will work just fine.
\nFor your new tables, you can create new migrations and Laravel will not complaint about this.
\nWell, but if you really need to migrate to a whole new database, you can do the following:
\n1) rename the tables you need to migrate
\nphp artisan migrate:make\n
\n2) create all your migrations with your and migrate them:
\nphp artisan migrate\n
\n3) use your database server sql utility to copy data from one table to another, it will be way faster than creating everything in Laravel, believe me. Most databases will let you do things like:
\nINSERT INTO users (FirstName, LastName)\nSELECT FirstName, LastName\nFROM users_old\n
\nAnd in some you'll be able to do the same using two different databases and columns names
\nINSERT INTO NEWdatabasename.users (firstName+' '+Lastname, email)\nSELECT name, email\nFROM OLDdatabasename.\n
\nOr you can just export data to a CSV file and then create a method in your Laravel seeding class to load that data into your database, with a lot of data to import, you just have to remember to execute:
\nDB::disableQueryLog();\n
\nSo your PHP doesn't run out of memory.
\nSee? There are a lot of options, probably many more, so pick one and if you need help, shoot more questions.
\n
soup wrap:
Databases are pretty much the same in Laravel and CodeIgniter. If your tables are fine the way they are and they have a primary key named id (this is not mandatory either), you can just connect Laravel to your database and it will work just fine.
For your new tables, you can create new migrations and Laravel will not complain about this.
Well, but if you really need to migrate to a whole new database, you can do the following:
1) rename the tables you need to migrate
php artisan migrate:make
2) create all your migrations and migrate them:
php artisan migrate
3) use your database server's SQL utility to copy data from one table to another; it will be way faster than creating everything in Laravel, believe me. Most databases will let you do things like:
INSERT INTO users (FirstName, LastName)
SELECT FirstName, LastName
FROM users_old
And in some, you'll be able to do the same using two different databases and column names:
INSERT INTO NEWdatabasename.users (name, email)
SELECT firstName+' '+Lastname, email
FROM OLDdatabasename.
Or you can just export data to a CSV file and then create a method in your Laravel seeding class to load that data into your database, with a lot of data to import, you just have to remember to execute:
DB::disableQueryLog();
So your PHP doesn't run out of memory.
See? There are a lot of options, probably many more, so pick one and if you need help, shoot more questions.
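Step 3 can be sketched with Python's bundled sqlite3, whose ATTACH plays the role of the second database (names like users_old follow the answer; the rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")          # plays the role of the new database
conn.execute("ATTACH ':memory:' AS olddb")  # plays the role of the old database
conn.executescript("""
CREATE TABLE olddb.users_old (FirstName TEXT, LastName TEXT);
INSERT INTO olddb.users_old VALUES ('Ada', 'Lovelace'), ('Alan', 'Turing');
CREATE TABLE users (FirstName TEXT, LastName TEXT);
""")

# The engine copies the rows itself -- no application-side loop needed.
conn.execute("""
    INSERT INTO users (FirstName, LastName)
    SELECT FirstName, LastName FROM olddb.users_old
""")
copied = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
```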
qid & accept id:
(21311393, 21313710)
query:
MS SQL - User Defined Function - Slope Intercept RSquare ; How to Group by Portfolio
soup:
Wow, this is a real cool example of how to use nested CTE's in a In Line Table Value Function. You want to use a ITVF since they are fast. See Wayne Sheffield’s blog article that attests to this fact.
\nI always start with a sample database/table if it is really complicated to make sure I give the user a correct solution.
\nLets create a database named [test] based on model.
\n--\n-- Create a simple db\n--\n\n-- use master\nuse master;\ngo\n\n-- delete existing databases\nIF EXISTS (SELECT name FROM sys.databases WHERE name = N'Test')\nDROP DATABASE Test\nGO\n\n-- simple db based on model\ncreate database Test;\ngo\n\n-- switch to new db\nuse [Test];\ngo\n
\nLets create a table type named [InputToLinearReg].
\n--\n-- Create table type to pass data\n--\n\n-- Delete the existing table type\nIF EXISTS (SELECT * FROM sys.systypes WHERE name = 'InputToLinearReg')\nDROP TYPE dbo.InputToLinearReg\nGO\n\n-- Create the table type\nCREATE TYPE InputToLinearReg AS TABLE\n(\nportfolio_cd char(1),\nmonth_num int,\ncollections_amt money\n);\ngo\n
\nOkay, here is the multi-layered SELECT statement that uses CTE's. The query analyzer treats this as a SQL statement which can be executed in parallel versus a regular function that can't. See the black box section of Wayne's article.
\n--\n-- Create in line table value function (fast)\n--\n\n-- Remove if it exists\nIF OBJECT_ID('CalculateLinearReg') > 0\nDROP FUNCTION CalculateLinearReg\nGO\n\n-- Create the function\nCREATE FUNCTION CalculateLinearReg\n( \n @ParmInTable AS dbo.InputToLinearReg READONLY \n) \nRETURNS TABLE \nAS\nRETURN\n(\n\n WITH cteRawData as\n (\n SELECT\n T.portfolio_cd,\n CAST(T.month_num as decimal(18, 6)) as x,\n LOG(CAST(T.collections_amt as decimal(18, 6))) as y\n FROM\n @ParmInTable as T\n ),\n\n cteAvgByPortfolio as\n (\n SELECT\n portfolio_cd,\n AVG(x) as xavg,\n AVG(y) as yavg\n FROM\n cteRawData \n GROUP BY \n portfolio_cd\n ),\n\n cteSlopeByPortfolio as\n (\n SELECT\n R.portfolio_cd,\n SUM((R.x - A.xavg) * (R.y - A.yavg)) / SUM(POWER(R.x - A.xavg, 2)) as slope\n FROM\n cteRawData as R \n INNER JOIN \n cteAvgByPortfolio A\n ON \n R.portfolio_cd = A.portfolio_cd\n GROUP BY \n R.portfolio_cd\n ),\n\n cteInterceptByPortfolio as\n (\n SELECT\n A.portfolio_cd,\n (A.yavg - (S.slope * A.xavg)) as intercept\n FROM\n cteAvgByPortfolio as A\n INNER JOIN \n cteSlopeByPortfolio S\n ON \n A.portfolio_cd = S.portfolio_cd\n\n )\n\n SELECT \n A.portfolio_cd,\n A.xavg,\n A.yavg,\n S.slope,\n I.intercept,\n 1 - (SUM(POWER(R.y - (I.intercept + S.slope * R.x), 2)) /\n (SUM(POWER(R.y - (I.intercept + S.slope * R.x), 2)) + \n SUM(POWER(((I.intercept + S.slope * R.x) - A.yavg), 2)))) as rsquared\n FROM\n cteRawData as R \n INNER JOIN \n cteAvgByPortfolio as A ON R.portfolio_cd = A.portfolio_cd\n INNER JOIN \n cteSlopeByPortfolio S ON A.portfolio_cd = S.portfolio_cd\n INNER JOIN \n cteInterceptByPortfolio I ON S.portfolio_cd = I.portfolio_cd\n GROUP BY \n A.portfolio_cd,\n A.xavg,\n A.yavg,\n S.slope,\n I.intercept\n);\n
\nLast but not least, setup a Table Variable and get the answers. Unlike you solution above, it groups by portfolio id.
\n-- Load data into variable\nDECLARE @InTable AS InputToLinearReg;\n\n-- insert data\ninsert into @InTable\nvalues\n('A', 1, 100.00),\n('A', 2, 90.00),\n('A', 3, 80.00),\n('A', 4, 70.00),\n('B', 1, 100.00),\n('B', 2, 90.00),\n('B', 3, 80.00);\n\n-- show data\nselect * from CalculateLinearReg(@InTable)\ngo\n
\nHere is a picture of the results using your data.
\n
\n
soup wrap:
Wow, this is a really cool example of how to use nested CTEs in an In-Line Table-Valued Function. You want to use an ITVF since they are fast. See Wayne Sheffield's blog article that attests to this fact.
I always start with a sample database/table if it is really complicated, to make sure I give the user a correct solution.
Let's create a database named [test] based on model.
--
-- Create a simple db
--
-- use master
use master;
go
-- delete existing databases
IF EXISTS (SELECT name FROM sys.databases WHERE name = N'Test')
DROP DATABASE Test
GO
-- simple db based on model
create database Test;
go
-- switch to new db
use [Test];
go
Let's create a table type named [InputToLinearReg].
--
-- Create table type to pass data
--
-- Delete the existing table type
IF EXISTS (SELECT * FROM sys.systypes WHERE name = 'InputToLinearReg')
DROP TYPE dbo.InputToLinearReg
GO
-- Create the table type
CREATE TYPE InputToLinearReg AS TABLE
(
portfolio_cd char(1),
month_num int,
collections_amt money
);
go
Okay, here is the multi-layered SELECT statement that uses CTEs. The query analyzer treats this as a SQL statement which can be executed in parallel, versus a regular function that can't. See the black box section of Wayne's article.
--
-- Create in line table value function (fast)
--
-- Remove if it exists
IF OBJECT_ID('CalculateLinearReg') > 0
DROP FUNCTION CalculateLinearReg
GO
-- Create the function
CREATE FUNCTION CalculateLinearReg
(
@ParmInTable AS dbo.InputToLinearReg READONLY
)
RETURNS TABLE
AS
RETURN
(
WITH cteRawData as
(
SELECT
T.portfolio_cd,
CAST(T.month_num as decimal(18, 6)) as x,
LOG(CAST(T.collections_amt as decimal(18, 6))) as y
FROM
@ParmInTable as T
),
cteAvgByPortfolio as
(
SELECT
portfolio_cd,
AVG(x) as xavg,
AVG(y) as yavg
FROM
cteRawData
GROUP BY
portfolio_cd
),
cteSlopeByPortfolio as
(
SELECT
R.portfolio_cd,
SUM((R.x - A.xavg) * (R.y - A.yavg)) / SUM(POWER(R.x - A.xavg, 2)) as slope
FROM
cteRawData as R
INNER JOIN
cteAvgByPortfolio A
ON
R.portfolio_cd = A.portfolio_cd
GROUP BY
R.portfolio_cd
),
cteInterceptByPortfolio as
(
SELECT
A.portfolio_cd,
(A.yavg - (S.slope * A.xavg)) as intercept
FROM
cteAvgByPortfolio as A
INNER JOIN
cteSlopeByPortfolio S
ON
A.portfolio_cd = S.portfolio_cd
)
SELECT
A.portfolio_cd,
A.xavg,
A.yavg,
S.slope,
I.intercept,
1 - (SUM(POWER(R.y - (I.intercept + S.slope * R.x), 2)) /
(SUM(POWER(R.y - (I.intercept + S.slope * R.x), 2)) +
SUM(POWER(((I.intercept + S.slope * R.x) - A.yavg), 2)))) as rsquared
FROM
cteRawData as R
INNER JOIN
cteAvgByPortfolio as A ON R.portfolio_cd = A.portfolio_cd
INNER JOIN
cteSlopeByPortfolio S ON A.portfolio_cd = S.portfolio_cd
INNER JOIN
cteInterceptByPortfolio I ON S.portfolio_cd = I.portfolio_cd
GROUP BY
A.portfolio_cd,
A.xavg,
A.yavg,
S.slope,
I.intercept
);
Last but not least, set up a Table Variable and get the answers. Unlike your solution above, it groups by portfolio id.
-- Load data into variable
DECLARE @InTable AS InputToLinearReg;
-- insert data
insert into @InTable
values
('A', 1, 100.00),
('A', 2, 90.00),
('A', 3, 80.00),
('A', 4, 70.00),
('B', 1, 100.00),
('B', 2, 90.00),
('B', 3, 80.00);
-- show data
select * from CalculateLinearReg(@InTable)
go
Here is a picture of the results using your data.

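To sanity-check the math, here is the regression portion re-derived in plain Python (math stdlib only), using the portfolio 'A' rows from the sample data above. Collections decline steadily, so the slope should be negative and R-squared close to 1:

```python
import math

# Portfolio 'A' rows from the sample data: (month_num, collections_amt)
data = [(1, 100.0), (2, 90.0), (3, 80.0), (4, 70.0)]
xs = [float(m) for m, _ in data]
ys = [math.log(c) for _, c in data]   # the function takes LOG of the amount

xavg = sum(xs) / len(xs)
yavg = sum(ys) / len(ys)
slope = (sum((x - xavg) * (y - yavg) for x, y in zip(xs, ys))
         / sum((x - xavg) ** 2 for x in xs))
intercept = yavg - slope * xavg

# R^2 exactly as the final SELECT computes it: 1 - SSE / (SSE + SSR)
sse = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
ssr = sum(((intercept + slope * x) - yavg) ** 2 for x, y in zip(xs, ys))
rsquared = 1 - sse / (sse + ssr)
```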
qid & accept id:
(21313983, 21314536)
query:
How to return only 1 (specific) instance of column value when multiple instances exist
soup:
I think this is the logic that you want to get the date:
\nselect itemcode,\n coalesce(min(case when qty_available > 0 then date end), min(date)) as thedate\nfrom timtest tt\nwhere date >= date(now())\ngroup by itemcode;\n
\nThe expression coalesce(min(case when qty > 0 then date end), min(date)) seems to encapsulate your logic. The first part of the coalesce returns the first date when qty > 0. If none of these exist, then it finds the first date with 0. You don't state what to do when there is no record for today, but there is a record in the future for 0. This returns the first such record.
\nTo get the quantity, let's join back to this:
\nselect tt.*\nfrom timtest tt join\n (select itemcode,\n coalesce(min(case when qty_available > 0 then date end), min(date)) as thedate\n from timtest tt\n where date >= date(now())\n group by itemcode\n ) id\n on tt.itemcode = id.itemcode and tt.date = id.thedate;\n
\nEDIT:
\nNo accounting for bad date formats. Here is a version for this situation:
\nselect tt.*\nfrom timtest tt join\n (select itemcode,\n coalesce(min(case when qty_available > 0 then thedate end), min(thedate)) as thedate\n from (select tt.*, str_to_date(date, '%m/%d/%Y') as thedate\n from timtest tt\n ) tt\n where thedate >= date(now())\n group by itemcode\n ) id\n on tt.itemcode = id.itemcode and str_to_date(tt.date, '%m/%d/%Y') = id.thedate;\n
\nAdvice for the future: store dates in the database as a date/datetime data time and not as strings. If you have store store them as strings, use the YYYY-MM-DD format, because you can use comparisons and order by.
\n
soup wrap:
I think this is the logic that you want to get the date:
select itemcode,
coalesce(min(case when qty_available > 0 then date end), min(date)) as thedate
from timtest tt
where date >= date(now())
group by itemcode;
The expression coalesce(min(case when qty_available > 0 then date end), min(date)) seems to encapsulate your logic. The first part of the coalesce returns the first date when qty_available > 0. If none of these exist, then it finds the first date with 0. You don't state what to do when there is no record for today, but there is a record in the future for 0. This returns the first such record.
To get the quantity, let's join back to this:
select tt.*
from timtest tt join
(select itemcode,
coalesce(min(case when qty_available > 0 then date end), min(date)) as thedate
from timtest tt
where date >= date(now())
group by itemcode
) id
on tt.itemcode = id.itemcode and tt.date = id.thedate;
EDIT:
No accounting for bad date formats. Here is a version for this situation:
select tt.*
from timtest tt join
(select itemcode,
coalesce(min(case when qty_available > 0 then thedate end), min(thedate)) as thedate
from (select tt.*, str_to_date(date, '%m/%d/%Y') as thedate
from timtest tt
) tt
where thedate >= date(now())
group by itemcode
) id
on tt.itemcode = id.itemcode and str_to_date(tt.date, '%m/%d/%Y') = id.thedate;
Advice for the future: store dates in the database as a date/datetime data type and not as strings. If you have to store them as strings, use the YYYY-MM-DD format, so that you can use comparisons and order by.
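The coalesce-over-conditional-min pattern runs unchanged in Python's bundled sqlite3. In this sketch the WHERE date >= date(now()) filter is dropped for brevity, the rows are invented, and the dates use the YYYY-MM-DD format the answer itself recommends:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE timtest (itemcode TEXT, date TEXT, qty_available INT);
INSERT INTO timtest VALUES
  ('A', '2030-01-01', 0), ('A', '2030-01-02', 5),
  ('B', '2030-01-01', 0), ('B', '2030-01-03', 0);
""")

# First date with stock if any exists, else the first date at all
rows = conn.execute("""
    SELECT itemcode,
           COALESCE(MIN(CASE WHEN qty_available > 0 THEN date END), MIN(date))
    FROM timtest GROUP BY itemcode ORDER BY itemcode
""").fetchall()
```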
qid & accept id:
(21352556, 21352901)
query:
Using unique records as table header
soup:
The generic SQL approach is to use conditional aggregation:
\nselect s.studentName,\n max(case when s.subjectName = 'subject1' then g.grade end) as Subject1,\n max(case when s.subjectName = 'subject2' then g.grade end) as Subject2,\n max(case when s.subjectName = 'subject3' then g.grade end) as Subject3\nfrom (students s join\n grades g\n on s.student_id = g.student_id\n ) join\n subjects su\n on g.subject_id = su.subject_id\ngroup by s.studentid, s.studentName;\n
\nSeveral databases also support the pivot syntax to do this.
\nEDIT:
\nThe Access query is:
\nselect s.studentName,\n max(iif(s.subjectName = 'subject1', grade, NULL)) as Subject1,\n max(iif(s.subjectName = 'subject2', grade, NULL)) as Subject2,\n max(iif(s.subjectName = 'subject3', grade, NULL)) as Subject3\nfrom students s inner join\n grades g\n on s.student_id = g.student_id inner join\n subjects su\n on g.subject_id = su.subject_id\ngroup by s.studentid, s.studentName;\n
\n
soup wrap:
The generic SQL approach is to use conditional aggregation:
select s.studentName,
       max(case when su.subjectName = 'subject1' then g.grade end) as Subject1,
       max(case when su.subjectName = 'subject2' then g.grade end) as Subject2,
       max(case when su.subjectName = 'subject3' then g.grade end) as Subject3
from (students s join
grades g
on s.student_id = g.student_id
) join
subjects su
on g.subject_id = su.subject_id
group by s.studentid, s.studentName;
Several databases also support the pivot syntax to do this.
EDIT:
The Access query is:
select s.studentName,
       max(iif(su.subjectName = 'subject1', grade, NULL)) as Subject1,
       max(iif(su.subjectName = 'subject2', grade, NULL)) as Subject2,
       max(iif(su.subjectName = 'subject3', grade, NULL)) as Subject3
from students s inner join
grades g
on s.student_id = g.student_id inner join
subjects su
on g.subject_id = su.subject_id
group by s.studentid, s.studentName;
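The conditional-aggregation pivot can be run in Python's bundled sqlite3. A single flat table stands in for the three joined tables to keep the sketch short; the CASE-per-column pivot itself is identical, and the rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE grades (studentName TEXT, subjectName TEXT, grade INT);
INSERT INTO grades VALUES
  ('Ann', 'subject1', 90), ('Ann', 'subject2', 80),
  ('Bob', 'subject1', 70), ('Bob', 'subject3', 85);
""")

# One output column per subject; MAX collapses each group to a single row
rows = conn.execute("""
    SELECT studentName,
           MAX(CASE WHEN subjectName = 'subject1' THEN grade END) AS Subject1,
           MAX(CASE WHEN subjectName = 'subject2' THEN grade END) AS Subject2,
           MAX(CASE WHEN subjectName = 'subject3' THEN grade END) AS Subject3
    FROM grades GROUP BY studentName ORDER BY studentName
""").fetchall()
```

Missing subjects come back as NULL (None), which is why the pivot needs no outer join.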
qid & accept id:
(21367807, 21367977)
query:
How to select last published comment created by student?
soup:
Try one of the following solutions:
\nSELECT src.Id, src.FirstName, src.LastName, src.Comment, src.InsertAt\nFROM \n(\n SELECT s.Id, s.FirstName, s.LastName, sc.Comment, sc.InsertAt,\n ROW_NUMBER() OVER(PARTITION BY sc.StudentId ORDER BY sc.InsertAt DESC) RowNum\n FROM dbo.Students s INNER JOIN dbo.StudentComments sc ON s.Id = sc.StudentId\n --WHERE sc.IsPublished = 1\n) src\nWHERE src.RowNum = 1; \n
\nor
\nSELECT s.Id, s.FirstName, s.LastName, lc.Comment, lc.InsertAt\nFROM dbo.Students s \nCROSS APPLY (\n SELECT TOP(1) sc.Comment, sc.InsertAt\n FROM dbo.StudentComments sc \n WHERE s.Id = sc.StudentId\n --AND sc.IsPublished = 1\n ORDER BY sc.InsertAt DESC\n) lc; -- Last comment\n
\n
soup wrap:
Try one of following solutions:
SELECT src.Id, src.FirstName, src.LastName, src.Comment, src.InsertAt
FROM
(
SELECT s.Id, s.FirstName, s.LastName, sc.Comment, sc.InsertAt,
ROW_NUMBER() OVER(PARTITION BY sc.StudentId ORDER BY sc.InsertAt DESC) RowNum
FROM dbo.Students s INNER JOIN dbo.StudentComments sc ON s.Id = sc.StudentId
--WHERE sc.IsPublished = 1
) src
WHERE src.RowNum = 1;
or
SELECT s.Id, s.FirstName, s.LastName, lc.Comment, lc.InsertAt
FROM dbo.Students s
CROSS APPLY (
SELECT TOP(1) sc.Comment, sc.InsertAt
FROM dbo.StudentComments sc
WHERE s.Id = sc.StudentId
--AND sc.IsPublished = 1
ORDER BY sc.InsertAt DESC
) lc; -- Last comment
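The first (ROW_NUMBER) solution ports directly to Python's bundled sqlite3, which supports window functions in SQLite 3.25+; CROSS APPLY does not exist in SQLite, so only the first pattern is sketched here (rows invented, Students join omitted for brevity):

```python
import sqlite3  # requires SQLite >= 3.25 for window functions

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE StudentComments (StudentId INT, Comment TEXT, InsertAt TEXT);
INSERT INTO StudentComments VALUES
  (1, 'first', '2024-01-01'), (1, 'latest', '2024-02-01'),
  (2, 'only',  '2024-01-15');
""")

# Rank comments newest-first per student, then keep rank 1
rows = conn.execute("""
    SELECT StudentId, Comment FROM (
        SELECT StudentId, Comment,
               ROW_NUMBER() OVER (PARTITION BY StudentId ORDER BY InsertAt DESC) rn
        FROM StudentComments
    ) WHERE rn = 1 ORDER BY StudentId
""").fetchall()
```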
qid & accept id:
(21384239, 21387895)
query:
SQL Query to show all available rooms under a property
soup:
it sounds like you try to build a Report and try to do the display in SQL instead of your web solution.
\nKeep the data and its presentation separate.
\nGet your datatable, and then loop through it with PHP, creating a table for every building.
\nOrdinarely, you would use recursion, but MySQL doesn't support it.
\nYou can use
\nORDER BY premise.name, premise.id, room.nr, room.id\n
\nMy guess is you need to group by room and property fields, using the max aggregate function for address and city fields, because a property (building) can have multiple addresses, one for each entrance...
\nSELECT \n premises.field_1\n ,premises.field_2\n ,premises.field_3\n\n ,room.field_1\n ,room.field_2\n ,room.field_3\n\n ,max(address.field1) as adr_f1\n ,max(address.field2) as adr_f2\n ,max(address.field3) as adr_f3 \nFROM Whatever\n\nJOIN WHATEVER\n\nWHERE (1=1) \nAND (whatever)\n\nGROUP BY \n\n premises.field_1\n ,premises.field_2\n ,premises.field_3\n\n ,room.field_1\n ,room.field_2\n ,room.field_3\n\nHAVING (WHATEVER)\n\nORDER BY premises.field_x, room.field_y\n
\n
soup wrap:
It sounds like you are trying to build a report and to do the display in SQL instead of in your web solution.
Keep the data and its presentation separate.
Get your datatable, and then loop through it with PHP, creating a table for every building.
Ordinarily, you would use recursion, but MySQL doesn't support it.
You can use
ORDER BY premise.name, premise.id, room.nr, room.id
My guess is you need to group by room and property fields, using the max aggregate function for address and city fields, because a property (building) can have multiple addresses, one for each entrance...
SELECT
premises.field_1
,premises.field_2
,premises.field_3
,room.field_1
,room.field_2
,room.field_3
,max(address.field1) as adr_f1
,max(address.field2) as adr_f2
,max(address.field3) as adr_f3
FROM Whatever
JOIN WHATEVER
WHERE (1=1)
AND (whatever)
GROUP BY
premises.field_1
,premises.field_2
,premises.field_3
,room.field_1
,room.field_2
,room.field_3
HAVING (WHATEVER)
ORDER BY premises.field_x, room.field_y
qid & accept id:
(21389431, 21389784)
query:
how do I know the minimum date in a query?
soup:
Either:
\nselect min(stamp) from tbl\n
\nOr:
\nselect stamp from tbl order by stamp asc limit 1\n
\nThe first can also be used as a window function, if you need it on an entire set without grouping.
\nIf you need the date in the stamp, cast it:
\nselect min(stamp::date) from tbl\n
\nOr:
\nselect stamp::date from tbl order by stamp asc limit 1\n
\n
soup wrap:
Either:
select min(stamp) from tbl
Or:
select stamp from tbl order by stamp asc limit 1
The first can also be used as a window function, if you need it on an entire set without grouping.
If you need the date in the stamp, cast it:
select min(stamp::date) from tbl
Or:
select stamp::date from tbl order by stamp asc limit 1
qid & accept id:
(21400367, 21400496)
query:
Separate rows based on a column that has min value
soup:
You're almost there. Just remove the AttendanceTime from the group by.
\nSELECT tal.PersonNo, min(tal.AttendanceTime) \n FROM mqa.T_AttendanceLog tal\n GROUP BY tal.PersonNo;\n
\nIf you want the entire row (incase you have other columns) you can use something like this:
\nselect *\n from mqa.T_AttendanceLog a\n where (PersonNo, AttendanceTime) in(\n select b.PersonNo, min(b.AttendanceTime)\n from mqa.T_AttendanceLog b\n group by b.PersonNo);\n
\n
soup wrap:
You're almost there. Just remove the AttendanceTime from the group by.
SELECT tal.PersonNo, min(tal.AttendanceTime)
FROM mqa.T_AttendanceLog tal
GROUP BY tal.PersonNo;
If you want the entire row (in case you have other columns) you can use something like this:
select *
from mqa.T_AttendanceLog a
where (PersonNo, AttendanceTime) in(
select b.PersonNo, min(b.AttendanceTime)
from mqa.T_AttendanceLog b
group by b.PersonNo);
qid & accept id:
(21409033, 21409141)
query:
How to iterate through a table from last row to first?
soup:
Change MySQL statement to be
\nSELECT * FROM 'mytable' ORDER BY 'id' DESC\n
\nor reverse the array using PHPs reverse array function
\nreturn array_reverse($data);\n
\n
soup wrap:
Change MySQL statement to be
SELECT * FROM `mytable` ORDER BY `id` DESC
or reverse the array using PHP's array_reverse function
return array_reverse($data);
qid & accept id:
(21424132, 21424242)
query:
Replace values in an sql query according to results of a nested query
soup:
You can use FIND_IN_SET()
\nSELECT *\n FROM request r JOIN locations l\n ON FIND_IN_SET(loc_id, locations) > 0\n WHERE loc_name = 'mordor'\n
\nHere is SQLFiddle demo
\nBut you better normalize your data by introducing a many-to-many table that may look like
\nCREATE TABLE request_location\n(\n request_id INT NOT NULL,\n loc_id INT NOT NULL,\n PRIMARY KEY (request_id, loc_id),\n FOREIGN KEY (request_id) REFERENCES request (request_id),\n FOREIGN KEY (loc_id) REFERENCES locations (loc_id)\n);\n
\nThis will pay off big time in a long run enabling you to maintain and query your data normally.
\nYour query then may look like
\nSELECT *\n FROM request_location rl JOIN request r \n ON rl.request_id = r.request_id JOIN locations l\n ON rl.loc_id = l.loc_id\n WHERE l.loc_name = 'mordor'\n
\nor even
\nSELECT rl.request_id\n FROM request_location rl JOIN locations l\n ON rl.loc_id = l.loc_id\n WHERE l.loc_name = 'mordor';\n
\nif you need to return only request_id
\nHere is SQLFiddle demo
\n
soup wrap:
You can use FIND_IN_SET()
SELECT *
FROM request r JOIN locations l
ON FIND_IN_SET(loc_id, locations) > 0
WHERE loc_name = 'mordor'
Here is SQLFiddle demo
But you'd better normalize your data by introducing a many-to-many table that may look like
CREATE TABLE request_location
(
request_id INT NOT NULL,
loc_id INT NOT NULL,
PRIMARY KEY (request_id, loc_id),
FOREIGN KEY (request_id) REFERENCES request (request_id),
FOREIGN KEY (loc_id) REFERENCES locations (loc_id)
);
This will pay off big time in the long run, enabling you to maintain and query your data normally.
Your query then may look like
SELECT *
FROM request_location rl JOIN request r
ON rl.request_id = r.request_id JOIN locations l
ON rl.loc_id = l.loc_id
WHERE l.loc_name = 'mordor'
or even
SELECT rl.request_id
FROM request_location rl JOIN locations l
ON rl.loc_id = l.loc_id
WHERE l.loc_name = 'mordor';
if you need to return only request_id
Here is SQLFiddle demo
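A minimal run of the normalized design in Python's bundled sqlite3 (schema trimmed to the relevant columns; rows invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE locations (loc_id INTEGER PRIMARY KEY, loc_name TEXT);
CREATE TABLE request (request_id INTEGER PRIMARY KEY);
CREATE TABLE request_location (
    request_id INT NOT NULL, loc_id INT NOT NULL,
    PRIMARY KEY (request_id, loc_id));
INSERT INTO locations VALUES (1, 'mordor'), (2, 'shire');
INSERT INTO request VALUES (10), (11);
INSERT INTO request_location VALUES (10, 1), (10, 2), (11, 2);
""")

# The junction table replaces the comma-separated list, so the lookup
# is a plain join instead of FIND_IN_SET
ids = [r[0] for r in conn.execute("""
    SELECT rl.request_id
    FROM request_location rl JOIN locations l ON rl.loc_id = l.loc_id
    WHERE l.loc_name = 'mordor'
""")]
```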
qid & accept id:
(21438881, 21438974)
query:
How to pass the returned value of SELECT statement to DELETE query in stored procedure?
soup:
Let's suppose you have the following schema
\nCREATE TABLE customers\n(\n customer_id INT, \n customer_email VARCHAR(17),\n PRIMARY KEY (customer_id)\n);\nCREATE TABLE child_table\n(\n child_id INT,\n customer_id INT, \n value INT,\n PRIMARY KEY (child_id),\n FOREIGN KEY (customer_id) REFERENCES customers (customer_id)\n);\n
\nNow to delete all child records knowing an email of the customer you can use multi-table delete syntax
\nCREATE PROCEDURE deleteCustomerData(IN emailAddr VARCHAR(50)) \n DELETE t\n FROM child_table t JOIN customers c \n ON t.customer_id = c.customer_id\n WHERE c.customer_email = emailAddr;\n
\nHere is SQLFiddle demo
\n
\n\n...but if i want to pass the returned value of SELECT stmt to DELETE...
\n
\nThat is exactly what you're doing in above mentioned example. But you can always rewrite it this way
\nDELETE t\n FROM child_table t JOIN \n(\n SELECT customer_id \n FROM customers JOIN ...\n WHERE customer_email = emailAddr\n AND ...\n) c\n ON t.customer_id = c.customer_id\n
\nor
\nDELETE \n FROM child_table \n WHERE customer_id IN \n(\n SELECT customer_id \n FROM customers JOIN ...\n WHERE customer_email = emailAddr\n AND ...\n) \n
\n
soup wrap:
Let's suppose you have the following schema
CREATE TABLE customers
(
customer_id INT,
customer_email VARCHAR(17),
PRIMARY KEY (customer_id)
);
CREATE TABLE child_table
(
child_id INT,
customer_id INT,
value INT,
PRIMARY KEY (child_id),
FOREIGN KEY (customer_id) REFERENCES customers (customer_id)
);
Now to delete all child records knowing an email of the customer you can use multi-table delete syntax
CREATE PROCEDURE deleteCustomerData(IN emailAddr VARCHAR(50))
DELETE t
FROM child_table t JOIN customers c
ON t.customer_id = c.customer_id
WHERE c.customer_email = emailAddr;
Here is SQLFiddle demo
...but if i want to pass the returned value of SELECT stmt to DELETE...
That is exactly what you're doing in the above-mentioned example. But you can always rewrite it this way
DELETE t
FROM child_table t JOIN
(
SELECT customer_id
FROM customers JOIN ...
WHERE customer_email = emailAddr
AND ...
) c
ON t.customer_id = c.customer_id
or
DELETE
FROM child_table
WHERE customer_id IN
(
SELECT customer_id
FROM customers JOIN ...
WHERE customer_email = emailAddr
AND ...
)
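The last variant runs as-is in Python's bundled sqlite3. SQLite has no multi-table DELETE t FROM ... JOIN form, so this sketch uses the answer's IN (SELECT ...) rewrite, with the schema from the answer and invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, customer_email TEXT);
CREATE TABLE child_table (child_id INTEGER PRIMARY KEY, customer_id INT, value INT);
INSERT INTO customers VALUES (1, 'a@x.com'), (2, 'b@x.com');
INSERT INTO child_table VALUES (10, 1, 5), (11, 1, 6), (12, 2, 7);
""")

# Delete all child rows belonging to the customer with the given email
conn.execute("""
    DELETE FROM child_table
    WHERE customer_id IN (SELECT customer_id FROM customers
                          WHERE customer_email = ?)
""", ("a@x.com",))
remaining = conn.execute("SELECT COUNT(*) FROM child_table").fetchone()[0]
```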
qid & accept id:
(21481598, 21481638)
query:
How to SELECT records from One table If Matching Record In Not Found In Other Table
soup:
Just add this to your WHERE clause:
\nAND DU.das_id_fk IS NULL\n
\nSay I have the following two tables:
\n\n+-------------------------+ +-------------------------+\n| Person | | Pet |\n+----------+--------------+ +-------------------------+\n| PersonID | INT(11) | | PetID | INT(11) |\n| Name | VARCHAR(255) | | PersonID | INT(11) |\n+----------+--------------+ | Name | VARCHAR(255) |\n +----------+--------------+\n
\nAnd my tables contain the following data:
\n\n+------------------------+ +---------------------------+\n| Person | | Pet |\n+----------+-------------+ +-------+----------+--------+\n| PersonID | Name | | PetID | PersonID | Name |\n+----------+-------------+ +-------+----------+--------+\n| 1 | Sean | | 5 | 1 | Lucy |\n| 2 | Javier | | 6 | 1 | Cooper |\n| 3 | tradebel123 | | 7 | 2 | Fluffy |\n+----------+-------------+ +-------+----------+--------+\n
\nNow, if I want a list of all Persons:
\nSELECT pr.PersonID, pr.Name\nFROM\n Person pr\n
\nIf I want a list of Persons that have pets (including their pet's names):
\nSELECT pr.PersonID, pr.Name, pt.Name AS PetName\nFROM\n Person pr\n INNER JOIN Pet pt ON pr.PersonID = pt.PersonID\n
\nIf I want a list of Persons that have no pets:
\nSELECT pr.PersonID, pr.`Name`\nFROM\n Person pr\n LEFT JOIN Pet pt ON pr.PersonID = pt.PersonID\nWHERE\n pt.`PetID` IS NULL\n
\nIf I want a list of all Persons and their pets (even if they don't have pets):
\nSELECT\n pr.PersonID,\n pr.Name,\n COALESCE(pt.Name, '') AS PetName\nFROM\n Person pr\n LEFT JOIN Pet pt ON pr.PersonID = pt.PersonID\n
\nIf I want a list of Persons and a count of how many pets they have:
\nSELECT pr.PersonID, pr.Name, COUNT(pt.PetID) AS NumPets\nFROM\n Person pr\n LEFT JOIN Pet pt ON pr.PersonID = pt.PersonID\nGROUP BY\n pr.PersonID, pr.Name\n
\nSame as above, but don't show Persons with 0 pets:
\nSELECT pr.PersonID, pr.Name, COUNT(pt.PetID) AS NumPets\nFROM\n Person pr\n LEFT JOIN Pet pt ON pr.PersonID = pt.PersonID\nGROUP BY\n pr.PersonID, pr.Name\nHAVING COUNT(pt.PetID) > 0\n
\n
soup wrap:
Just add this to your WHERE clause:
AND DU.das_id_fk IS NULL
Say I have the following two tables:
+-------------------------+ +-------------------------+
| Person | | Pet |
+----------+--------------+ +-------------------------+
| PersonID | INT(11) | | PetID | INT(11) |
| Name | VARCHAR(255) | | PersonID | INT(11) |
+----------+--------------+ | Name | VARCHAR(255) |
+----------+--------------+
And my tables contain the following data:
+------------------------+ +---------------------------+
| Person | | Pet |
+----------+-------------+ +-------+----------+--------+
| PersonID | Name | | PetID | PersonID | Name |
+----------+-------------+ +-------+----------+--------+
| 1 | Sean | | 5 | 1 | Lucy |
| 2 | Javier | | 6 | 1 | Cooper |
| 3 | tradebel123 | | 7 | 2 | Fluffy |
+----------+-------------+ +-------+----------+--------+
Now, if I want a list of all Persons:
SELECT pr.PersonID, pr.Name
FROM
Person pr
If I want a list of Persons that have pets (including their pet's names):
SELECT pr.PersonID, pr.Name, pt.Name AS PetName
FROM
Person pr
INNER JOIN Pet pt ON pr.PersonID = pt.PersonID
If I want a list of Persons that have no pets:
SELECT pr.PersonID, pr.`Name`
FROM
Person pr
LEFT JOIN Pet pt ON pr.PersonID = pt.PersonID
WHERE
pt.`PetID` IS NULL
If I want a list of all Persons and their pets (even if they don't have pets):
SELECT
pr.PersonID,
pr.Name,
COALESCE(pt.Name, '') AS PetName
FROM
Person pr
LEFT JOIN Pet pt ON pr.PersonID = pt.PersonID
If I want a list of Persons and a count of how many pets they have:
SELECT pr.PersonID, pr.Name, COUNT(pt.PetID) AS NumPets
FROM
Person pr
LEFT JOIN Pet pt ON pr.PersonID = pt.PersonID
GROUP BY
pr.PersonID, pr.Name
Same as above, but don't show Persons with 0 pets:
SELECT pr.PersonID, pr.Name, COUNT(pt.PetID) AS NumPets
FROM
Person pr
LEFT JOIN Pet pt ON pr.PersonID = pt.PersonID
GROUP BY
pr.PersonID, pr.Name
HAVING COUNT(pt.PetID) > 0
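These joins can be exercised end-to-end with an in-memory SQLite database. This is an illustrative sketch only; the schema and rows mirror the sample Person/Pet tables above.

```python
import sqlite3

# Illustrative sketch: load the sample Person/Pet data into SQLite.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Person (PersonID INTEGER, Name TEXT);
CREATE TABLE Pet (PetID INTEGER, PersonID INTEGER, Name TEXT);
INSERT INTO Person VALUES (1,'Sean'),(2,'Javier'),(3,'tradebel123');
INSERT INTO Pet VALUES (5,1,'Lucy'),(6,1,'Cooper'),(7,2,'Fluffy');
""")

# Persons with no pets: LEFT JOIN, keep only rows where no Pet matched.
no_pets = con.execute("""
    SELECT pr.PersonID, pr.Name
    FROM Person pr
    LEFT JOIN Pet pt ON pr.PersonID = pt.PersonID
    WHERE pt.PetID IS NULL
""").fetchall()

# Pet count per person; COUNT(pt.PetID) ignores the NULLs produced by
# the LEFT JOIN, so pet-less persons get 0.
pet_counts = con.execute("""
    SELECT pr.PersonID, pr.Name, COUNT(pt.PetID) AS NumPets
    FROM Person pr
    LEFT JOIN Pet pt ON pr.PersonID = pt.PersonID
    GROUP BY pr.PersonID, pr.Name
    ORDER BY pr.PersonID
""").fetchall()
```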
qid & accept id:
(21528762, 21529300)
query:
(Possibly) Complex Join across four tables using aggregates
soup wrap:
If you only want the latest row you can turn each of your subqueries into an APPLY:
SELECT Account.Name,
AnnAccs.PeriodEnd AS AnnAccsPeriodEnd,
AnnAccs.LastPeriod AS AnnAccsLastPeriod,
CorpTax.PeriodEnd AS CorpTaxPeriodEnd,
CorpTax.LastPeriod AS CorpTaxLastPeriod,
SelfAss.PeriodEnd AS SAPeriodEnd,
SelfAss.LastPeriod AS SALastPeriod
FROM dbo.Account
OUTER APPLY
( SELECT TOP 1
ca.new_PeriodEnd AS PeriodEnd,
ca.new_LastPeriod AS LastPeriod,
ca.new_CorporationTaxActivityId AS AccId
FROM new_corporationtaxactivity ca
WHERE ca.new_CorporationTaxActivityId = Account.AccountId
ORDER BY ca.new_PeriodEnd DESC
) AS CorpTax
OUTER APPLY
( SELECT TOP 1 aa.new_PeriodEnd AS PeriodEnd,
aa.new_LastPeriod AS LastPeriod,
aa.new_AnnualAccountsActivityId AS AccId
FROM new_annualaccountsactivity aa
WHERE aa.new_AnnualAccountsActivityId = Account.AccountId
ORDER BY aa.new_PeriodEnd DESC
) AS AnnAccs
OUTER APPLY
( SELECT TOP 1 sa.new_PeriodEnd AS PeriodEnd,
sa.new_LastPeriod AS LastPeriod,
sa.new_SelfAssessmentActivityId AS AccId
FROM new_selfassessmentactivity sa
WHERE sa.new_SelfAssessmentActivityId = Account.AccountId
ORDER BY sa.new_PeriodEnd DESC
) As SelfAss
WHERE (Account.new_ClientStatus = '100000000' OR Account.new_ClientStatus = '100000001')
AND (AnnAccs.LastPeriod = '1' OR CorpTax.LastPeriod = '1' OR SelfAss.LastPeriod = '1')
Or you can add ROW_NUMBER() to each of your subqueries and limit it to the top result (RowNum = 1):
SELECT Account.Name,
AnnAccs.PeriodEnd AS AnnAccsPeriodEnd,
AnnAccs.LastPeriod AS AnnAccsLastPeriod,
CorpTax.PeriodEnd AS CorpTaxPeriodEnd,
CorpTax.LastPeriod AS CorpTaxLastPeriod,
SelfAss.PeriodEnd AS SAPeriodEnd,
SelfAss.LastPeriod AS SALastPeriod
FROM dbo.Account
LEFT JOIN
( SELECT ca.new_PeriodEnd AS PeriodEnd,
ca.new_LastPeriod AS LastPeriod,
ca.new_CorporationTaxActivityId AS AccId,
ROW_NUMBER() OVER(PARTITION BY ca.new_CorporationTaxActivityId ORDER BY ca.new_PeriodEnd DESC) AS RowNum
FROM new_corporationtaxactivity ca
) AS CorpTax
ON CorpTax.AccId = Account.AccountId
AND CorpTax.RowNum = 1
LEFT JOIN
( SELECT aa.new_PeriodEnd AS PeriodEnd,
aa.new_LastPeriod AS LastPeriod,
aa.new_AnnualAccountsActivityId AS AccId,
ROW_NUMBER() OVER(PARTITION BY aa.new_AnnualAccountsActivityId ORDER BY aa.new_PeriodEnd DESC) AS RowNum
FROM new_annualaccountsactivity aa
) AS AnnAccs
ON AnnAccs.AccId = Account.AccountId
AND AnnAccs.RowNum = 1
LEFT JOIN
( SELECT sa.new_PeriodEnd AS PeriodEnd,
sa.new_LastPeriod AS LastPeriod,
sa.new_SelfAssessmentActivityId AS AccId,
ROW_NUMBER() OVER(PARTITION BY sa.new_SelfAssessmentActivityId ORDER BY sa.new_PeriodEnd DESC) AS RowNum
FROM new_selfassessmentactivity sa
) As SelfAss
ON SelfAss.AccId = Account.AccountId
AND SelfAss.RowNum = 1
WHERE (Account.new_ClientStatus = '100000000' OR Account.new_ClientStatus = '100000001')
AND (AnnAccs.LastPeriod = '1' OR CorpTax.LastPeriod = '1' OR SelfAss.LastPeriod = '1');
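The ROW_NUMBER() "latest row per key" pattern can be tried in isolation with an in-memory SQLite database (window functions need SQLite 3.25 or later). The activity table and its rows below are hypothetical stand-ins for the CRM activity tables above.

```python
import sqlite3

# Hypothetical, simplified activity table for the latest-row pattern.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE activity (acc_id INTEGER, period_end TEXT, last_period INTEGER);
INSERT INTO activity VALUES
 (1,'2013-06-30',0),(1,'2013-12-31',1),
 (2,'2012-09-30',0),(2,'2013-09-30',1);
""")

# Number each account's rows newest-first, then keep only row 1.
latest = con.execute("""
    SELECT acc_id, period_end FROM (
        SELECT acc_id, period_end,
               ROW_NUMBER() OVER (PARTITION BY acc_id
                                  ORDER BY period_end DESC) AS rn
        FROM activity
    )
    WHERE rn = 1
    ORDER BY acc_id
""").fetchall()
```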
qid & accept id:
(21532604, 21537534)
query:
SUM of DATEDIFF in minutes for each 2 rows
soup wrap:
Using @DaveZych's sample data, I managed to calculate the same results using the SQL statement below:
;WITH DataSource ([StartOrEnd], [badge_no], [punch_timestamp]) AS
(
SELECT ROW_NUMBER() OVER (PARTITION BY [badge_no] ORDER BY [punch_timestamp]) +
ROW_NUMBER() OVER (PARTITION BY [badge_no] ORDER BY [punch_timestamp]) % 2
,[badge_no]
,[punch_timestamp]
FROM #Time
),
TimesPerBadge_No ([badge_no], [StartOrEnd], [Minutes]) AS
(
SELECT [badge_no]
,[StartOrEnd]
,DATEDIFF(MINUTE, MIN([punch_timestamp]), MAX([punch_timestamp]))
FROM DataSource
GROUP BY [badge_no]
,[StartOrEnd]
)
SELECT [badge_no]
,SUM([Minutes])
FROM TimesPerBadge_No
GROUP BY [badge_no]
Here you can see the values of each CTE:
First, we need to group each start and end date pair:
SELECT ROW_NUMBER() OVER (PARTITION BY [badge_no] ORDER BY [punch_timestamp]) +
ROW_NUMBER() OVER (PARTITION BY [badge_no] ORDER BY [punch_timestamp]) % 2
,[badge_no]
,[punch_timestamp]
FROM #Time

Now, we can calculate the minutes difference in each group:
SELECT [badge_no]
,[StartOrEnd]
,DATEDIFF(MINUTE, MIN([punch_timestamp]), MAX([punch_timestamp]))
FROM DataSource
GROUP BY [badge_no]
,[StartOrEnd]

and finally summarize the minutes for each badge_no:
SELECT [badge_no]
,SUM([Minutes])
FROM TimesPerBadge_No
GROUP BY [badge_no]
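The pairing trick above, ROW_NUMBER() plus ROW_NUMBER() % 2, gives consecutive punches the same group number (1+1=2, 2+0=2, 3+1=4, 4+0=4, ...). It can be sketched with SQLite window functions; the badge data below is made up, and julianday() stands in for DATEDIFF(MINUTE, ...).

```python
import sqlite3

# Made-up punch data: badge 1 has two in/out pairs, badge 2 has one.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE punches (badge_no INTEGER, punch_timestamp TEXT);
INSERT INTO punches VALUES
 (1,'2014-02-03 08:00:00'),(1,'2014-02-03 08:30:00'),
 (1,'2014-02-03 09:00:00'),(1,'2014-02-03 09:45:00'),
 (2,'2014-02-03 10:00:00'),(2,'2014-02-03 10:20:00');
""")

totals = con.execute("""
WITH ds AS (
    SELECT badge_no, punch_timestamp,
           ROW_NUMBER() OVER (PARTITION BY badge_no ORDER BY punch_timestamp)
         + ROW_NUMBER() OVER (PARTITION BY badge_no ORDER BY punch_timestamp) % 2
           AS grp
    FROM punches
),
per_pair AS (
    -- minutes between the first and last punch of each pair
    SELECT badge_no,
           CAST(ROUND((julianday(MAX(punch_timestamp))
                     - julianday(MIN(punch_timestamp))) * 1440) AS INTEGER) AS minutes
    FROM ds
    GROUP BY badge_no, grp
)
SELECT badge_no, SUM(minutes)
FROM per_pair
GROUP BY badge_no
ORDER BY badge_no
""").fetchall()
```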

qid & accept id:
(21535167, 21535283)
query:
SQL Copy only data from table1 where it doesnt exist in table 2?
soup wrap:
Assuming SQL Server given the SELECT INTO in your question:
Using your sample query to populate a new table with only records from Table1 where the item value wasn't in Table2:
SELECT a.Item
INTO new_table2
FROM table1 a
LEFT JOIN Table2 b
ON a.item = b.item
WHERE b.item IS NULL
If you don't want a new table and just want to add to Table2 the records from Table1 that aren't already there:
INSERT INTO Table2 (Item)
SELECT a.Item
FROM table1 a
LEFT JOIN Table2 b
ON a.item = b.item
WHERE b.item IS NULL
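A minimal runnable sketch of the second (anti-join insert) pattern, using SQLite and made-up rows: only Items from table1 that are absent from table2 get copied in.

```python
import sqlite3

# Made-up data: 'b' already exists in table2, 'a' and 'c' do not.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (Item TEXT);
CREATE TABLE table2 (Item TEXT);
INSERT INTO table1 VALUES ('a'),('b'),('c');
INSERT INTO table2 VALUES ('b');
""")

# Anti-join insert: LEFT JOIN, keep rows with no match, insert those.
con.execute("""
INSERT INTO table2 (Item)
SELECT a.Item
FROM table1 a
LEFT JOIN table2 b ON a.Item = b.Item
WHERE b.Item IS NULL
""")

# 'b' is not duplicated; 'a' and 'c' are added.
rows = sorted(r[0] for r in con.execute("SELECT Item FROM table2"))
```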
qid & accept id:
(21540219, 21554872)
query:
Finding Outliers In SQL
soup wrap:
Sometimes simple is best; no need for an intro to statistics yet. I would recommend starting with simple grouping. Within a grouped query you can take the average, the minimum, the maximum, and other useful bits of data. Here are a couple of examples to get you started:
SELECT Table1.State, Table1.Yr, Count(Table1.Price) AS CountOfPrice, Min(Table1.Price) AS MinOfPrice, Max(Table1.Price) AS MaxOfPrice, Avg(Table1.Price) AS AvgOfPrice
FROM Table1
GROUP BY Table1.State, Table1.Yr;
Or (in case you want month data included)
SELECT Table1.State, Table1.Yr, Month([Dt]) AS Mnth, Count(Table1.Price) AS CountOfPrice, Min(Table1.Price) AS MinOfPrice, Max(Table1.Price) AS MaxOfPrice
FROM Table1
GROUP BY Table1.State, Table1.Yr, Month([Dt]);
Obviously you'll need to modify the table and field names. (Just so you know, 'Year' and 'Date' are both reserved words, so it's best not to use them as field names.)
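The grouped-statistics idea can be sketched with hypothetical price data in SQLite. (Month([Dt]) is Access syntax; in SQLite a month column would use strftime('%m', Dt) instead.)

```python
import sqlite3

# Hypothetical price data: two CA rows and one NY row in 2013.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Table1 (State TEXT, Yr INTEGER, Dt TEXT, Price REAL);
INSERT INTO Table1 VALUES
 ('CA',2013,'2013-01-05',100.0),
 ('CA',2013,'2013-01-20',300.0),
 ('NY',2013,'2013-02-01',50.0);
""")

# Count, min, max, and average price per State and year.
stats = con.execute("""
    SELECT State, Yr, COUNT(Price), MIN(Price), MAX(Price), AVG(Price)
    FROM Table1
    GROUP BY State, Yr
    ORDER BY State
""").fetchall()
```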
qid & accept id:
(21546809, 21549472)
query:
Split text value insert another cell
soup wrap:
Create this function:
create function f_parca
(
@name varchar(100)
) returns varchar(max)
as
begin
declare @rv varchar(max) = ''
if @name is not null
select top (len(@name)) @rv += ','+ left(@name, number + 1)
from master..spt_values v
where type = 'p'
return stuff(@rv, 1,1,'')
end
Testing the function
select dbo.f_parca('TClausen')
Result:
T,TC,TCl,TCla,TClau,TClaus,TClause,TClausen
Update your table like this:
UPDATE export1
SET PARCA = dbo.f_parca(name)
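What f_parca builds, stripped of the master..spt_values numbers trick, is just a comma-separated list of every leading substring of the name. A hypothetical Python equivalent makes the logic explicit:

```python
def prefixes(name):
    """Return all leading substrings of name, comma-separated.

    Mirrors what f_parca produces for a non-NULL input.
    """
    return ','.join(name[:i] for i in range(1, len(name) + 1))
```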
qid & accept id:
(21613270, 21616500)
query:
Returning only the most recent values of a query
soup wrap:
Of course. You just need a sub-query to identify the most recent record for each agent. Something like (untested):
select a.eventdatetime
,b.resourcename
,b.extension
,a.eventtype
from agentstatedetail as a
,resource as b
,team as c
,(SELECT agentid, MAX(eventdatetime) AS lastevent
FROM agentstatedetail
WHERE DATE(eventdatetime) = TODAY
GROUP BY agentid) AS d
where (a.agentid = b.resourceid)
and (b.assignedteamid = 10)
and (c.teamname like 'teamnamehere %')
and (d.agentid = a.agentid and a.eventdatetime = d.lastevent)
group by a.eventdatetime
,b.resourcename
,b.extension
,a.eventtype
order by eventdatetime desc
You may need to look at indexing agentstatedetail to get maximum efficiency.
EDIT
Per your comment about avoiding the nested query and handling the skipping of agentid values already seen, that's a fairly trivial client-side solution. I don't know exactly how you're handling this on the PHP side, but you'd basically want to do something like this:
$data = $db->query("select a.eventdatetime, b.resourcename, b.extension, a.eventtype
from agentstatedetail as a, resource as b, team as c
where date(eventdatetime) = date(current)
and (a.agentid = b.resourceid) and (b.assignedteamid = 10)
and (c.teamname like 'ITS Help Desk %')
group by a.eventdatetime, b.resourcename,
b.extension, a.eventtype
order by eventdatetime desc");
$agent = Array();
foreach($data as $row){
if(!$agent[$row['RESOURCENAME']]++) {
echo
"" . $row['RESOURCENAME'] .
" " . $row['EVENTTYPE'] .
" ";
}
}
The associative array $agent tracks how many records have been seen for a particular agent. When that's empty, it's the first occurrence. The exact non-zero number is not really useful, we just use a post-increment for efficiency, rather than setting $agent[$row['RESOURCENAME']] explicitly in the loop.
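The same first-occurrence filtering reads naturally in Python. The rows below are hypothetical and assumed to arrive newest-first, exactly as the query's ORDER BY eventdatetime DESC returns them.

```python
# Hypothetical result rows, already sorted newest-first.
rows = [
    {"RESOURCENAME": "Alice", "EVENTTYPE": "Ready"},
    {"RESOURCENAME": "Alice", "EVENTTYPE": "Talking"},
    {"RESOURCENAME": "Bob", "EVENTTYPE": "Not Ready"},
]

seen = set()
latest = []
for row in rows:
    # The first row we meet for an agent is that agent's latest event.
    if row["RESOURCENAME"] not in seen:
        seen.add(row["RESOURCENAME"])
        latest.append((row["RESOURCENAME"], row["EVENTTYPE"]))
```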
qid & accept id:
(21622435, 21623431)
query:
SQL CASE WHEN, when i want an "including" row
soup wrap:
Query:
SELECT CASE WHEN mark = 'Ford' THEN 'Ford' END AS Mark,
COUNT(*)
FROM Table1 t
WHERE mark = 'Ford'
GROUP BY mark
UNION ALL
SELECT CASE WHEN mark = 'Ford' AND Transmition = 'A'
THEN 'including Fords with automatic transmitions' END AS Mark,
COUNT(*)
FROM Table1 t
WHERE mark = 'Ford'
AND Transmition = 'A'
GROUP BY CASE WHEN mark = 'Ford' AND Transmition = 'A'
THEN 'including Fords with automatic transmitions' END
Result:
| MARK | COUNT(*) |
|---------------------------------------------|----------|
| Ford | 4 |
| including Fords with automatic transmitions | 3 |
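Because each branch's WHERE clause already pins the group down, the query can be demonstrated in slightly simplified form (without the CASE/GROUP BY) in SQLite, using made-up rows that reproduce the 4/3 counts shown:

```python
import sqlite3

# Made-up rows: four Fords, three of them automatic.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Table1 (mark TEXT, Transmition TEXT);
INSERT INTO Table1 VALUES
 ('Ford','A'),('Ford','A'),('Ford','A'),('Ford','M'),('Opel','A');
""")

# One row for all Fords, one "including" row for the automatic subset.
result = con.execute("""
    SELECT 'Ford' AS Mark, COUNT(*) FROM Table1 WHERE mark = 'Ford'
    UNION ALL
    SELECT 'including Fords with automatic transmitions', COUNT(*)
    FROM Table1 WHERE mark = 'Ford' AND Transmition = 'A'
""").fetchall()
```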
qid & accept id:
(21626432, 21626549)
query:
Comparing String,if it is NULL in Sql Server 2008
soup wrap:
!=/<> '' is not the same as IS NOT NULL! You need this:
IF(Name <> '')
// Do some stuff
ELSE IF(Phone <> '')
// Do some stuff
ELSE
// Do some other stuff
If Name or Phone can be NULL, you need this:
IF(ISNULL(Name, '') <> '')
// Do some stuff
ELSE IF(ISNULL(Phone, '') <> '')
// Do some stuff
ELSE
// Do some other stuff
In SQL, NULL <> '' never evaluates to true: any comparison against NULL yields UNKNOWN, and (under the default ANSI_NULLS setting) even NULL <> NULL is UNKNOWN rather than true.
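SQLite illustrates the same three-valued logic: a NULL value satisfies neither <> '' nor = ''. (T-SQL's ISNULL(x, '') corresponds to IFNULL(x, '') in SQLite.)

```python
import sqlite3

# Both raw comparisons against NULL come back NULL (UNKNOWN, shown as
# None in Python); only the IFNULL-guarded one is a definite false (0).
con = sqlite3.connect(":memory:")
checks = con.execute(
    "SELECT NULL <> '', NULL = '', IFNULL(NULL, '') <> ''"
).fetchone()
```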
qid & accept id:
(21640927, 21640997)
query:
remove duplicate records in oracle
soup wrap:
This works for SQL Server:
delete a from newproducts as a
where
exists(
select * from newproducts b
where a.id = b.id and a.date < b.date)
The same (or the following) should work on Oracle:
delete from newproducts a
where
exists(
select * from newproducts b
where a.id = b.id and a.date < b.date)
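The correlated-EXISTS delete also runs on SQLite. The sketch below uses made-up rows and names the date column dt:

```python
import sqlite3

# Made-up rows: id 1 has an older and a newer row, id 2 has one row.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE newproducts (id INTEGER, dt TEXT);
INSERT INTO newproducts VALUES
 (1,'2014-01-01'),(1,'2014-02-01'),(2,'2014-01-15');
""")

# Delete every row for which a newer row with the same id exists,
# keeping only the latest row per id.
con.execute("""
DELETE FROM newproducts
WHERE EXISTS (SELECT 1 FROM newproducts b
              WHERE newproducts.id = b.id AND newproducts.dt < b.dt)
""")
remaining = con.execute(
    "SELECT id, dt FROM newproducts ORDER BY id"
).fetchall()
```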
qid & accept id:
(21646708, 21646749)
query:
check which names have the same field in a database
soup wrap:
How about this:
select group_concat(name) as names, time
from table t
group by time
having count(*) > 1;
This will give you output such as:
Names Time
Richard,Luigi 8:00
. . .
You can then format this on the application side.
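SQLite ships group_concat() too, so the pattern can be tried directly. The table and rows below are hypothetical, and since group_concat gives no ordering guarantee, the concatenated names should be compared as a set:

```python
import sqlite3

# Hypothetical schedule: Richard and Luigi share the 8:00 slot.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE schedule (name TEXT, time TEXT);
INSERT INTO schedule VALUES
 ('Richard','8:00'),('Luigi','8:00'),('Mario','9:00');
""")

# One row per time slot that more than one person shares.
shared = con.execute("""
    SELECT group_concat(name) AS names, time
    FROM schedule
    GROUP BY time
    HAVING COUNT(*) > 1
""").fetchall()
```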
qid & accept id:
(21669936, 21670216)
query:
Join and get only single row with respect to each id
soup wrap:
You can select only one imageId (the minimum) for each ProductId by joining to the filtered imageIds like this:
SELECT p.ProductId, ProductName, i.imageId, imagePath
FROM product p
INNER JOIN Image i
ON i.ProductId = p.ProductId
INNER JOIN
(SELECT MIN(imageId) As imageId, ProductId
FROM image
GROUP BY ProductId
) o
ON o.imageId = i.imageId
or by filtering imageId using a WHERE clause:
SELECT p.ProductId, ProductName, imageId, imagePath
FROM product p
INNER JOIN Image i
ON i.ProductId = p.ProductId
WHERE imageId IN
(SELECT MIN(imageId) As imageId
FROM image
GROUP BY ProductId
)
SQLFiddle Demo
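The WHERE ... IN (SELECT MIN(imageId) ...) variant can be sketched in SQLite with made-up products and images:

```python
import sqlite3

# Made-up catalog: product 1 has two images, product 2 has one.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE product (ProductId INTEGER, ProductName TEXT);
CREATE TABLE image (imageId INTEGER, ProductId INTEGER, imagePath TEXT);
INSERT INTO product VALUES (1,'Shirt'),(2,'Mug');
INSERT INTO image VALUES
 (10,1,'/img/10.png'),(11,1,'/img/11.png'),(12,2,'/img/12.png');
""")

# One row per product: only each product's smallest imageId survives.
one_per_product = con.execute("""
    SELECT p.ProductId, p.ProductName, i.imageId, i.imagePath
    FROM product p
    INNER JOIN image i ON i.ProductId = p.ProductId
    WHERE i.imageId IN (SELECT MIN(imageId) FROM image GROUP BY ProductId)
    ORDER BY p.ProductId
""").fetchall()
```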
qid & accept id:
(21692871, 21722269)
query:
Combine multiple rows into multiple columns dynamically in SQL Server
soup wrap:
I would do it using dynamic SQL, but here is the PIVOT solution (http://sqlfiddle.com/#!6/a63a6/1/0):
SELECT badge, name, [AP_KDa], [AP_Match], [ADC_KDA],[ADC_Match],[TOP_KDA],[TOP_Match] FROM
(
SELECT badge, name, col, val FROM(
SELECT *, Job+'_KDA' as Col, KDA as Val FROM @T
UNION
SELECT *, Job+'_Match' as Col,Match as Val FROM @T
) t
) tt
PIVOT ( max(val) for Col in ([AP_KDa], [AP_Match], [ADC_KDA],[ADC_Match],[TOP_KDA],[TOP_Match]) ) AS pvt
Bonus: this is how PIVOT can be combined with dynamic SQL (http://sqlfiddle.com/#!6/a63a6/7/0). Again, I would prefer to do it more simply, without PIVOT, but this was good exercise for me:
SELECT badge, name, cast(Job+'_KDA' as nvarchar(128)) as Col, KDA as Val INTO #Temp1 FROM Temp
INSERT INTO #Temp1 SELECT badge, name, Job+'_Match' as Col, Match as Val FROM Temp
DECLARE @columns nvarchar(max)
SELECT @columns = COALESCE(@columns + ', ', '') + Col FROM #Temp1 GROUP BY Col
DECLARE @sql nvarchar(max) = 'SELECT badge, name, '+@columns+' FROM #Temp1 PIVOT ( max(val) for Col in ('+@columns+') ) AS pvt'
exec (@sql)
DROP TABLE #Temp1
qid & accept id:
(21731573, 21732258)
query:
Infering missing ranges in a continuous scale
soup wrap:
You don't need the view.
This should do what you want (change the literal 2 to a variable; I tested it with 2).
The first query grabs the discount if there's a discount. The second (connected by union) would grab a penalty if there's a penalty, but of an amount above the first row's from_amount, and the third (connected by union) would grab the penalty if there is one and it's below the first row's from_amount.
You can test it here: http://sqlfiddle.com/#!4/d41d8/25188/0
with discounts as
( select 25 as from_amount, 39 as to_amount, .02 as discount from dual union all
select 40 as from_amount, 49 as to_amount, .05 as discount from dual union all
select 50 as from_amount, 99999 as to_amount, .10 as discount from dual )
, penalties as
( select 5 as from_amount, 9 as to_amount, .10 as penalty from dual union all
select 10 as from_amount, 19 as to_amount, .05 as penalty from dual)
select discount as change
from discounts
where 2 between from_amount and to_amount
union all
select -penalty as change
from penalties
where 2 between from_amount and to_amount
union all
select -penalty as change
from penalties
where 2 < (select min(from_amount) from penalties)
and from_amount = (select min(from_amount) from penalties)
Regarding your last edit, the query below would show "0" for any amount for which there is neither a penalty nor a discount (the version of my query above would just show no rows for such a situation). You may prefer that it show zero, like this:
select discount as change
from discounts
where 22 between from_amount and to_amount
union all
select -penalty as change
from penalties
where 22 between from_amount and to_amount
union all
select -penalty as change
from penalties
where 22 < (select min(from_amount) from penalties)
and from_amount = (select min(from_amount) from penalties)
union all
select 0 as change
from dual
where not exists (select 1 from discounts where 22 between from_amount and to_amount)
and not exists (select 1 from penalties where 22 between from_amount and to_amount)
and 22 >= (select min(from_amount) from penalties)
If you change the SQL for that view to the below, you should get the range in between to show zero:
select discounts.from_amount as from_amount,
discounts.to_amount as to_amount,
discounts.discount * -1 as change
from discounts
union
select penalties.from_amount as from_amount,
penalties.to_amount as to_amount,
penalties.penalty as change
from penalties
union
select p.to_amount + 1, d.from_amount - 1, 0 as change
from discounts d, penalties p
where d.from_amount = (select min(from_amount) from discounts) and
p.to_amount = (select max(to_amount) from penalties)
order by from_amount desc
qid & accept id:
(21740326, 21740481)
query:
ROLLUP Function; Replace NULL with 'Total' w/ Column data type INT not VARCHAR
soup wrap:
Test Data
DECLARE @MyTable TABLE (Column1 INT,Column2 INT)
INSERT INTO @MyTable VALUES
(1,1),(1,2),(1,3),(2,1),(2,2),(2,3),(3,1),(3,2),(3,3)
SELECT CASE
WHEN GROUPING(Column1) = 1 THEN 'Total'
ELSE CAST(Column1 AS VARCHAR(10)) --<-- Cast as Varchar
END Column1
, SUM(Column2) AS MySum
FROM @MyTable
GROUP BY Column1
WITH ROLLUP;
Result Set
╔═════════╦═══════╗
║ Column1 ║ MySum ║
╠═════════╬═══════╣
║ 1 ║ 6 ║
║ 2 ║ 6 ║
║ 3 ║ 6 ║
║ Total ║ 18 ║
╚═════════╩═══════╝
Note
The reason you couldn't do what you were trying to do is that every branch of a CASE expression must return the same data type.
In the above query I just CAST Column1 to varchar and it worked.
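SQLite has no WITH ROLLUP, but the same result shape, including the need to put the grouping values and the 'Total' label in a single text column, can be emulated with a UNION ALL total row:

```python
import sqlite3

# Same sample data as the answer's table variable.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE MyTable (Column1 INTEGER, Column2 INTEGER);
INSERT INTO MyTable VALUES
 (1,1),(1,2),(1,3),(2,1),(2,2),(2,3),(3,1),(3,2),(3,3);
""")

# Per-group sums plus a grand-total row, CAST so that both the group
# keys and the 'Total' label fit in one text column; the ORDER BY
# pushes the total row to the end.
rows = con.execute("""
    SELECT Column1, MySum FROM (
        SELECT CAST(Column1 AS TEXT) AS Column1, SUM(Column2) AS MySum
        FROM MyTable GROUP BY Column1
        UNION ALL
        SELECT 'Total', SUM(Column2) FROM MyTable
    )
    ORDER BY (Column1 = 'Total'), Column1
""").fetchall()
```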
qid & accept id:
(21746336, 21746384)
query:
How to repeat the same SQL query for different column values
soup wrap:
Try this query, with your_table as a placeholder for your table name:
SELECT ENAME, EID, Salary FROM your_table WHERE ENAME IN ('AAA','DDD','ZZZ');
or
SELECT ENAME, EID, Salary FROM your_table WHERE ENAME IN (SELECT ENAME FROM your_table WHERE <condition>);
qid & accept id:
(21765911, 21766175)
query:
In MySQL how to write SQL to search for words in a field?
soup:
This will work for your particular example:
\nselect comment \nfrom tbl\nwhere soundex(comment) like '%D510%' or comment like '%dumb%';\n
\nIt won't find misspellings in the comment.
\nEDIT:
\nYou could do something like this:
\nselect comment\nfrom tbl\nwhere soundex(comment) = soundex('dumb') or\n      soundex(substring_index(substring_index(comment, ' ', 2), ' ', -1)) = soundex('dumb') or\n      soundex(substring_index(substring_index(comment, ' ', 3), ' ', -1)) = soundex('dumb') or\n      soundex(substring_index(substring_index(comment, ' ', 4), ' ', -1)) = soundex('dumb') or\n      soundex(substring_index(substring_index(comment, ' ', 5), ' ', -1)) = soundex('dumb');\n
\nA bit brute force.
\nThe need to do this suggests that you should consider a full text index.
\n
soup wrap:
This will work for your particular example:
select comment
from tbl
where soundex(comment) like '%D510%' or comment like '%dumb%';
It won't find misspellings in the comment.
EDIT:
You could do something like this:
select comment
from tbl
where soundex(comment) = soundex('dumb') or
      soundex(substring_index(substring_index(comment, ' ', 2), ' ', -1)) = soundex('dumb') or
      soundex(substring_index(substring_index(comment, ' ', 3), ' ', -1)) = soundex('dumb') or
      soundex(substring_index(substring_index(comment, ' ', 4), ' ', -1)) = soundex('dumb') or
      soundex(substring_index(substring_index(comment, ' ', 5), ' ', -1)) = soundex('dumb');
A bit brute force.
The need to do this suggests that you should consider a full text index.
qid & accept id:
(21786302, 21788209)
query:
SQL Sum MTD & YTD
soup:
SELECT\n Period = 'MTD',\n Total_value = SUM(T0.TotalSumSy) \nFROM dbo.INV1 T0 \n INNER JOIN dbo.OINV T1 \n ON T1.DocEntry = T0.DocEntry\nWHERE \n T1.DocDate >= DATEADD(month,DATEDIFF(month,'20010101',GETDATE()),'20010101')\n AND \n T1.DocDate < DATEADD(month,1+DATEDIFF(month,'20010101',GETDATE()),'20010101')\n\nUNION ALL\n\nSELECT\n 'YTD', \n SUM(T0.TotalSumSy) \nFROM dbo.INV1 T0 \n INNER JOIN dbo.OINV T1 \n ON T1.DocEntry = T0.DocEntry\nWHERE \n T1.DocDate >= DATEADD(year,DATEDIFF(year,'20010101',GETDATE()),'20010101')\n AND \n T1.DocDate < DATEADD(year,1+DATEDIFF(year,'20010101',GETDATE()),'20010101') ;\n
\nThe (complicated) conditions in the WHERE clauses are used instead of the YEAR(column) = YEAR(GETDATE()) style you had previously, so indexes can be used. When you apply a function to a column, you make indexes unusable (with some minor exceptions for some functions and some versions of SQL Server). So, the best thing is to try to convert the conditions to this form:
\ncolumn <operator> AnyComplexFunction()\n
\n
soup wrap:
SELECT
Period = 'MTD',
Total_value = SUM(T0.TotalSumSy)
FROM dbo.INV1 T0
INNER JOIN dbo.OINV T1
ON T1.DocEntry = T0.DocEntry
WHERE
T1.DocDate >= DATEADD(month,DATEDIFF(month,'20010101',GETDATE()),'20010101')
AND
T1.DocDate < DATEADD(month,1+DATEDIFF(month,'20010101',GETDATE()),'20010101')
UNION ALL
SELECT
'YTD',
SUM(T0.TotalSumSy)
FROM dbo.INV1 T0
INNER JOIN dbo.OINV T1
ON T1.DocEntry = T0.DocEntry
WHERE
T1.DocDate >= DATEADD(year,DATEDIFF(year,'20010101',GETDATE()),'20010101')
AND
T1.DocDate < DATEADD(year,1+DATEDIFF(year,'20010101',GETDATE()),'20010101') ;
The (complicated) conditions in the WHERE clauses are used instead of the YEAR(column) = YEAR(GETDATE()) style you had previously, so indexes can be used. When you apply a function to a column, you make indexes unusable (with some minor exceptions for some functions and some versions of SQL Server). So, the best thing is to try to convert the conditions to this form:
column <operator> AnyComplexFunction()
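Not from the original answer: the sargable pattern can be sketched with SQLite and Python's datetime, using a made-up invoice table. The boundaries are computed once, and the bare column is compared against them with a half-open range, exactly the shape the answer recommends:

```python
import sqlite3
from datetime import date

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inv (DocDate TEXT, TotalSumSy REAL)")
conn.executemany("INSERT INTO inv VALUES (?, ?)",
                 [("2014-02-03", 10.0), ("2014-02-20", 5.0), ("2014-01-15", 7.0)])

today = date(2014, 2, 25)            # fixed "today" so the example is reproducible
month_start = today.replace(day=1)   # first day of the current month
# First day of the next month (rolls the year over in December)
next_month = month_start.replace(year=month_start.year + (month_start.month == 12),
                                 month=month_start.month % 12 + 1)

# Half-open range on the bare column: no function wraps DocDate,
# so an index on DocDate stays usable.
(mtd,) = conn.execute(
    "SELECT SUM(TotalSumSy) FROM inv WHERE DocDate >= ? AND DocDate < ?",
    (month_start.isoformat(), next_month.isoformat())).fetchone()
print(mtd)
```

The YTD variant is the same query with year boundaries instead of month boundaries.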
qid & accept id:
(21832842, 21832920)
query:
Get Last message loaded based on message type
soup:
You can use the ROW_NUMBER() function to assign each of your messages a rank by message date (restarting at 1 for each message type), then just limit the results to the top-ranked message:
\nWITH AllMessages AS\n( SELECT MessageTypes.MessageType, \n Messages.MessageDate, \n Messages.ValueDate, \n Messages.MessageReference, \n Messages.Beneficiary, \n Messages.StatusId,\n MessageStatus.Status, \n BICProfile.BIC,\n RowNumber = ROW_NUMBER() OVER(PARTITION BY Messages.MessageTypeId \n ORDER BY Messages.MessageDate DESC)\n FROM Messages \n INNER JOIN MessageStatus \n ON Messages.StatusId = MessageStatus.Id \n INNER JOIN MessageTypes \n ON Messages.MessageTypeId = MessageTypes.MessageTypeId \n INNER JOIN BICProfile \n ON Messages.SenderId = dbo.BICProfile.BicId \n WHERE BICProfile.BIC = 'someValue'\n AND Messages.StatusId IN (4, 5, 6)\n)\nSELECT MessageType, \n MessageDate, \n ValueDate, \n MessageReference, \n Beneficiary, \n StatusId,\n Status, \n BIC \nFROM AllMessages\nWHERE RowNumber = 1;\n
\nIf you can't use ROW_NUMBER then you can use a subquery to get the latest message date per type:
\nSELECT Messages.MessageTypeID, MessageDate = MAX(Messages.MessageDate)\nFROM Messages\n INNER JOIN BICProfile \n ON Messages.SenderId = dbo.BICProfile.BicId \nWHERE BICProfile.BIC = 'someValue'\nAND Messages.StatusId IN (4, 5, 6)\nGROUP BY Messages.MessageTypeID\n
\nThen inner join the results of this back to your main query to filter the results:
\nSELECT MessageTypes.MessageType, \n Messages.MessageDate, \n Messages.ValueDate, \n Messages.MessageReference, \n Messages.Beneficiary, \n Messages.StatusId,\n MessageStatus.Status, \n BICProfile.BIC\nFROM Messages \n INNER JOIN MessageStatus \n ON Messages.StatusId = MessageStatus.Id \n INNER JOIN MessageTypes \n ON Messages.MessageTypeId = MessageTypes.MessageTypeId \n INNER JOIN BICProfile \n ON Messages.SenderId = dbo.BICProfile.BicId \n INNER JOIN \n ( SELECT Messages.MessageTypeID, \n MessageDate = MAX(Messages.MessageDate)\n FROM Messages\n INNER JOIN BICProfile \n ON Messages.SenderId = dbo.BICProfile.BicId \n WHERE BICProfile.BIC = 'someValue'\n AND Messages.StatusId IN (4, 5, 6)\n GROUP BY Messages.MessageTypeID\n ) AS MaxMessage\n ON MaxMessage.MessageTypeID = Messages.MessageTypeID\n AND MaxMessage.MessageDate = Messages.MessageDate\nWHERE BICProfile.BIC = 'someValue'\nAND Messages.StatusId IN (4, 5, 6);\n
\nN.B. This second method will return multiple rows per message type if the latest message date is shared by more than one message. This behaviour can be replicated in the first query by replacing ROW_NUMBER with RANK.
\n
\nEDIT
\nIf you will have multiple messages with the same date and only want to return one of them you need to expand the ordering within the row_number function, i.e. if you wanted to pick the message with the maximum id when there were ties you could make it:
\nRowNumber = ROW_NUMBER() OVER(PARTITION BY Messages.MessageTypeId \n ORDER BY Messages.MessageDate DESC,\n Messages.MessageID DESC)\n
\nSo the full query would be:
\nWITH AllMessages AS\n( SELECT MessageTypes.MessageType, \n Messages.MessageDate, \n Messages.ValueDate, \n Messages.MessageReference, \n Messages.Beneficiary, \n Messages.StatusId,\n MessageStatus.Status, \n BICProfile.BIC,\n RowNumber = ROW_NUMBER() OVER(PARTITION BY Messages.MessageTypeId \n ORDER BY Messages.MessageDate DESC,\n Messages.MessageID DESC)\n FROM Messages \n INNER JOIN MessageStatus \n ON Messages.StatusId = MessageStatus.Id \n INNER JOIN MessageTypes \n ON Messages.MessageTypeId = MessageTypes.MessageTypeId \n INNER JOIN BICProfile \n ON Messages.SenderId = dbo.BICProfile.BicId \n WHERE BICProfile.BIC = 'someValue'\n AND Messages.StatusId IN (4, 5, 6)\n)\nSELECT MessageType, \n MessageDate, \n ValueDate, \n MessageReference, \n Beneficiary, \n StatusId,\n Status, \n BIC \nFROM AllMessages\nWHERE RowNumber = 1;\n
\n
soup wrap:
You can use the ROW_NUMBER() function to assign each of your messages a rank by message date (restarting at 1 for each message type), then just limit the results to the top-ranked message:
WITH AllMessages AS
( SELECT MessageTypes.MessageType,
Messages.MessageDate,
Messages.ValueDate,
Messages.MessageReference,
Messages.Beneficiary,
Messages.StatusId,
MessageStatus.Status,
BICProfile.BIC,
RowNumber = ROW_NUMBER() OVER(PARTITION BY Messages.MessageTypeId
ORDER BY Messages.MessageDate DESC)
FROM Messages
INNER JOIN MessageStatus
ON Messages.StatusId = MessageStatus.Id
INNER JOIN MessageTypes
ON Messages.MessageTypeId = MessageTypes.MessageTypeId
INNER JOIN BICProfile
ON Messages.SenderId = dbo.BICProfile.BicId
WHERE BICProfile.BIC = 'someValue'
AND Messages.StatusId IN (4, 5, 6)
)
SELECT MessageType,
MessageDate,
ValueDate,
MessageReference,
Beneficiary,
StatusId,
Status,
BIC
FROM AllMessages
WHERE RowNumber = 1;
If you can't use ROW_NUMBER then you can use a subquery to get the latest message date per type:
SELECT Messages.MessageTypeID, MessageDate = MAX(Messages.MessageDate)
FROM Messages
INNER JOIN BICProfile
ON Messages.SenderId = dbo.BICProfile.BicId
WHERE BICProfile.BIC = 'someValue'
AND Messages.StatusId IN (4, 5, 6)
GROUP BY Messages.MessageTypeID
Then inner join the results of this back to your main query to filter the results:
SELECT MessageTypes.MessageType,
Messages.MessageDate,
Messages.ValueDate,
Messages.MessageReference,
Messages.Beneficiary,
Messages.StatusId,
MessageStatus.Status,
BICProfile.BIC
FROM Messages
INNER JOIN MessageStatus
ON Messages.StatusId = MessageStatus.Id
INNER JOIN MessageTypes
ON Messages.MessageTypeId = MessageTypes.MessageTypeId
INNER JOIN BICProfile
ON Messages.SenderId = dbo.BICProfile.BicId
INNER JOIN
( SELECT Messages.MessageTypeID,
MessageDate = MAX(Messages.MessageDate)
FROM Messages
INNER JOIN BICProfile
ON Messages.SenderId = dbo.BICProfile.BicId
WHERE BICProfile.BIC = 'someValue'
AND Messages.StatusId IN (4, 5, 6)
GROUP BY Messages.MessageTypeID
) AS MaxMessage
ON MaxMessage.MessageTypeID = Messages.MessageTypeID
AND MaxMessage.MessageDate = Messages.MessageDate
WHERE BICProfile.BIC = 'someValue'
AND Messages.StatusId IN (4, 5, 6);
N.B. This second method will return multiple rows per message type if the latest message date is shared by more than one message. This behaviour can be replicated in the first query by replacing ROW_NUMBER with RANK.
EDIT
If you will have multiple messages with the same date and only want to return one of them you need to expand the ordering within the row_number function, i.e. if you wanted to pick the message with the maximum id when there were ties you could make it:
RowNumber = ROW_NUMBER() OVER(PARTITION BY Messages.MessageTypeId
ORDER BY Messages.MessageDate DESC,
Messages.MessageID DESC)
So the full query would be:
WITH AllMessages AS
( SELECT MessageTypes.MessageType,
Messages.MessageDate,
Messages.ValueDate,
Messages.MessageReference,
Messages.Beneficiary,
Messages.StatusId,
MessageStatus.Status,
BICProfile.BIC,
RowNumber = ROW_NUMBER() OVER(PARTITION BY Messages.MessageTypeId
ORDER BY Messages.MessageDate DESC,
Messages.MessageID DESC)
FROM Messages
INNER JOIN MessageStatus
ON Messages.StatusId = MessageStatus.Id
INNER JOIN MessageTypes
ON Messages.MessageTypeId = MessageTypes.MessageTypeId
INNER JOIN BICProfile
ON Messages.SenderId = dbo.BICProfile.BicId
WHERE BICProfile.BIC = 'someValue'
AND Messages.StatusId IN (4, 5, 6)
)
SELECT MessageType,
MessageDate,
ValueDate,
MessageReference,
Beneficiary,
StatusId,
Status,
BIC
FROM AllMessages
WHERE RowNumber = 1;
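Not part of the original answer: the latest-row-per-group pattern, including the id tiebreaker from the EDIT, can be exercised on a toy table with SQLite (3.25+ for window functions; the schema below is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE msgs (id INT, msg_type TEXT, msg_date TEXT);
INSERT INTO msgs VALUES
  (1, 'A', '2014-02-01'),
  (2, 'A', '2014-02-05'),
  (3, 'B', '2014-02-03'),
  (4, 'B', '2014-02-03');  -- tie on date: id DESC breaks it, as in the EDIT
""")

rows = conn.execute("""
WITH ranked AS (
  SELECT id, msg_type, msg_date,
         ROW_NUMBER() OVER (PARTITION BY msg_type
                            ORDER BY msg_date DESC, id DESC) AS rn
  FROM msgs
)
SELECT msg_type, id FROM ranked WHERE rn = 1 ORDER BY msg_type
""").fetchall()
print(rows)
```

Type A keeps id 2 (later date); type B has a date tie, so the extra `id DESC` ordering deterministically keeps id 4.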
qid & accept id:
(21835289, 21835578)
query:
Store multiple data tables in single database table
soup:
Consider this:\nCreate three tables: product, feature, and product_feature (and maybe product_photos).
\nThe product table will be
\npid, p_name, p_description, p_price, ...\ninsert query \nINSERT INTO product (p_name, p_description, p_price, ...) VALUES (?,?,?,...)\n
\nThe feature table will be
\nfid, f_name, f_description, ...\ninsert query \nINSERT INTO feature (f_name, f_description, ...) VALUES (?,?,...)\n
\nnow the product_feature table will be
\nid, pid, fid\nquery for one product\n-- say the product id is 1\nINSERT INTO product_feature (pid, fid) VALUES (1, 10)\nINSERT INTO product_feature (pid, fid) VALUES (1, 15)\nINSERT INTO product_feature (pid, fid) VALUES (1, 30)\n
\nwhere pid and fid are foreign keys with relations; phpMyAdmin can set those up for you.\nYou can then add a product with multiple features.
\nthen maybe the photo table
\nfoto_id, photo_name, photo_path ....\n
\nuse InnoDB for all the tables
\nLet me know if you need further help
\n
soup wrap:
Consider this
Create three tables: product, feature, and product_feature (and maybe product_photos).
The product table will be
pid, p_name, p_description, p_price, ...
insert query
INSERT INTO product (p_name, p_description, p_price, ...) VALUES (?,?,?,...)
The feature table will be
fid, f_name, f_description, ...
insert query
INSERT INTO feature (f_name, f_description, ...) VALUES (?,?,...)
now the product_feature table will be
id, pid, fid
query for one product
-- say the product id is 1
INSERT INTO product_feature (pid, fid) VALUES (1, 10)
INSERT INTO product_feature (pid, fid) VALUES (1, 15)
INSERT INTO product_feature (pid, fid) VALUES (1, 30)
where pid and fid are foreign keys with relations; phpMyAdmin can set those up for you.
You can then add a product with multiple features.
then maybe the photo table
foto_id, photo_name, photo_path ....
use InnoDB for all the tables
Let me know if you need further help
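Not part of the original answer: the three-table schema can be sketched end to end in SQLite (table and sample names are illustrative, mirroring the columns above). Note SQLite only enforces foreign keys when the pragma is on:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this to enforce FKs
conn.executescript("""
CREATE TABLE product (pid INTEGER PRIMARY KEY, p_name TEXT);
CREATE TABLE feature (fid INTEGER PRIMARY KEY, f_name TEXT);
CREATE TABLE product_feature (
  id  INTEGER PRIMARY KEY,
  pid INTEGER REFERENCES product(pid),
  fid INTEGER REFERENCES feature(fid)
);
INSERT INTO product VALUES (1, 'Widget');
INSERT INTO feature VALUES (10, 'red'), (15, 'waterproof'), (30, 'large');
INSERT INTO product_feature (pid, fid) VALUES (1, 10), (1, 15), (1, 30);
""")

# Join through the link table to list a product's features
features = conn.execute("""
    SELECT f.f_name
    FROM product p
    JOIN product_feature pf ON pf.pid = p.pid
    JOIN feature f ON f.fid = pf.fid
    WHERE p.p_name = 'Widget'
    ORDER BY f.fid
""").fetchall()
print([f for (f,) in features])
```

One product row, three link rows, three features back out: the many-to-many shape the answer describes.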
qid & accept id:
(21841623, 21843075)
query:
Combing multiple rows into one row
soup:
I am not quite sure how the index in your query matches the index column in your data. But the query that you want is:
\nSELECT index,\n max(CASE WHEN index = 1 THEN Booknumber END) AS BookNumber1 ,\n max(CASE WHEN index = 2 THEN Booknumber END) AS BookNumber2,\n max(CASE WHEN index = 3 THEN Booknumber END) AS BookNumber3\nFROM Mytable\nGROUP BY index;\n
\nGiven your data, the query seems more like:
\nSELECT index,\n max(CASE WHEN ind = 1 THEN Booknumber END) AS BookNumber1 ,\n max(CASE WHEN ind = 2 THEN Booknumber END) AS BookNumber2,\n max(CASE WHEN ind = 3 THEN Booknumber END) AS BookNumber3\nFROM (select mt.*, row_number() over (partition by index order by BookNumber) as ind\n from Mytable mt\n ) mt\nGROUP BY index;\n
\nBy the way, "index" is a reserved word, so I assume that it is just a placeholder for another column name. Otherwise, you need to escape it with double quotes or square braces.
\n
soup wrap:
I am not quite sure how the index in your query matches the index column in your data. But the query that you want is:
SELECT index,
max(CASE WHEN index = 1 THEN Booknumber END) AS BookNumber1 ,
max(CASE WHEN index = 2 THEN Booknumber END) AS BookNumber2,
max(CASE WHEN index = 3 THEN Booknumber END) AS BookNumber3
FROM Mytable
GROUP BY index;
Given your data, the query seems more like:
SELECT index,
max(CASE WHEN ind = 1 THEN Booknumber END) AS BookNumber1 ,
max(CASE WHEN ind = 2 THEN Booknumber END) AS BookNumber2,
max(CASE WHEN ind = 3 THEN Booknumber END) AS BookNumber3
FROM (select mt.*, row_number() over (partition by index order by BookNumber) as ind
from Mytable mt
) mt
GROUP BY index;
By the way, "index" is a reserved word, so I assume that it is just a placeholder for another column name. Otherwise, you need to escape it with double quotes or square braces.
qid & accept id:
(21869166, 21869585)
query:
MySQL: For each row in table, change one row in another table
soup:
You are selecting a field that is not part of the group by or being aggregated.
\nSELECT data.id from \ndata INNER JOIN changes ON\n data.c=changes.c_old AND data.g=changes.g \nGROUP BY changes.id\n
\nYou should use an aggregate function on data.id in the select, or add data.id to the GROUP BY (though I suspect that is not the result you want either).
\nThe INNER JOIN results in this dataset:
\n+---------+--------+--------+------------+---------------+---------------+-----------+\n| data.id | data.c | data.g | changes.id | changes.c_old | changes.c_new | changes.g |\n+---------+--------+--------+------------+---------------+---------------+-----------+\n| 1 | 1 | 2 | 1 | 1 | 2 | 2 |\n| 1 | 1 | 2 | 3 | 1 | 2 | 2 |\n| 2 | 1 | 2 | 1 | 1 | 2 | 2 |\n| 2 | 1 | 2 | 3 | 1 | 2 | 2 |\n| 3 | 1 | 2 | 1 | 1 | 2 | 2 |\n| 3 | 1 | 2 | 3 | 1 | 2 | 2 |\n| 6 | 2 | 3 | 2 | 2 | 1 | 3 |\n| 7 | 2 | 3 | 2 | 2 | 1 | 3 |\n+---------+--------+--------+------------+---------------+---------------+-----------+\n
\n1,2,3 are expanded out due to multiple matches in the join, and 4,5 are eliminated due to no match
\nYou then are grouping by changes.id, which is going to result in (showing with values in CSV list after grouping)
\n+---------+--------+--------+------------+---------------+---------------+-----------+\n| data.id | data.c | data.g | changes.id | changes.c_old | changes.c_new | changes.g |\n+---------+--------+--------+------------+---------------+---------------+-----------+\n| 1,2,3 | 1,1,1 | 2,2,2 | 1 | 1,1,1 | 2,2,2 | 2,2,2 |\n| 1,2,3 | 1,1,1 | 2,2,2 | 3 | 1,1,1 | 2,2,2 | 2,2,2 |\n| 6,7 | 2,2 | 3,3 | 2 | 2,2 | 1,1 | 3,3 |\n+---------+--------+--------+------------+---------------+---------------+-----------+\n
\nSince there is no aggregate or deterministic way of choosing among the available values, you get the 1 from data.id chosen for both changes.id 1 and 3.
\nDepending on what you want (3 rows? all distinct values?), you should add that deterministic behaviour to the select.
\nBtw, I am pretty sure other SQL engines (such as MSSQL) would not allow that select because it's ambiguous. As for MySQL's behaviour in that situation, I believe it chooses the first value from the first row stored, which is why you probably get 1 in both cases, but it is free to choose whatever value it wishes.
\nhttp://dev.mysql.com/doc/refman/5.7/en/group-by-extensions.html
\n\nMySQL extends the use of GROUP BY so that the select list can refer to nonaggregated columns not named in the GROUP BY clause. This means that the preceding query is legal in MySQL. You can use this feature to get better performance by avoiding unnecessary column sorting and grouping. However, this is useful primarily when all values in each nonaggregated column not named in the GROUP BY are the same for each group. The server is free to choose any value from each group, so unless they are the same, the values chosen are indeterminate. Furthermore, the selection of values from each group cannot be influenced by adding an ORDER BY clause. Sorting of the result set occurs after values have been chosen, and ORDER BY does not affect which values within each group the server chooses.
\n
\n
soup wrap:
You are selecting a field that is not part of the group by or being aggregated.
SELECT data.id from
data INNER JOIN changes ON
data.c=changes.c_old AND data.g=changes.g
GROUP BY changes.id
You should use an aggregate function on data.id in the select, or add data.id to the GROUP BY (though I suspect that is not the result you want either).
The INNER JOIN results in this dataset:
+---------+--------+--------+------------+---------------+---------------+-----------+
| data.id | data.c | data.g | changes.id | changes.c_old | changes.c_new | changes.g |
+---------+--------+--------+------------+---------------+---------------+-----------+
| 1 | 1 | 2 | 1 | 1 | 2 | 2 |
| 1 | 1 | 2 | 3 | 1 | 2 | 2 |
| 2 | 1 | 2 | 1 | 1 | 2 | 2 |
| 2 | 1 | 2 | 3 | 1 | 2 | 2 |
| 3 | 1 | 2 | 1 | 1 | 2 | 2 |
| 3 | 1 | 2 | 3 | 1 | 2 | 2 |
| 6 | 2 | 3 | 2 | 2 | 1 | 3 |
| 7 | 2 | 3 | 2 | 2 | 1 | 3 |
+---------+--------+--------+------------+---------------+---------------+-----------+
1,2,3 are expanded out due to multiple matches in the join, and 4,5 are eliminated due to no match
You then are grouping by changes.id, which is going to result in (showing with values in CSV list after grouping)
+---------+--------+--------+------------+---------------+---------------+-----------+
| data.id | data.c | data.g | changes.id | changes.c_old | changes.c_new | changes.g |
+---------+--------+--------+------------+---------------+---------------+-----------+
| 1,2,3 | 1,1,1 | 2,2,2 | 1 | 1,1,1 | 2,2,2 | 2,2,2 |
| 1,2,3 | 1,1,1 | 2,2,2 | 3 | 1,1,1 | 2,2,2 | 2,2,2 |
| 6,7 | 2,2 | 3,3 | 2 | 2,2 | 1,1 | 3,3 |
+---------+--------+--------+------------+---------------+---------------+-----------+
Since there is no aggregate or deterministic way of choosing among the available values, you get the 1 from data.id chosen for both changes.id 1 and 3.
Depending on what you want (3 rows? all distinct values?), you should add that deterministic behaviour to the select.
Btw, I am pretty sure other SQL engines (such as MSSQL) would not allow that select because it's ambiguous. As for MySQL's behaviour in that situation, I believe it chooses the first value from the first row stored, which is why you probably get 1 in both cases, but it is free to choose whatever value it wishes.
http://dev.mysql.com/doc/refman/5.7/en/group-by-extensions.html
MySQL extends the use of GROUP BY so that the select list can refer to nonaggregated columns not named in the GROUP BY clause. This means that the preceding query is legal in MySQL. You can use this feature to get better performance by avoiding unnecessary column sorting and grouping. However, this is useful primarily when all values in each nonaggregated column not named in the GROUP BY are the same for each group. The server is free to choose any value from each group, so unless they are the same, the values chosen are indeterminate. Furthermore, the selection of values from each group cannot be influenced by adding an ORDER BY clause. Sorting of the result set occurs after values have been chosen, and ORDER BY does not affect which values within each group the server chooses.
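Not part of the original answer: SQLite, like MySQL, tolerates bare columns in a GROUP BY, so the deterministic fix the answer recommends (wrapping data.id in an aggregate) can be shown on the answer's own tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE data (id INT, c INT, g INT);
CREATE TABLE changes (id INT, c_old INT, c_new INT, g INT);
INSERT INTO data VALUES (1,1,2),(2,1,2),(3,1,2),(6,2,3),(7,2,3);
INSERT INTO changes VALUES (1,1,2,2),(2,2,1,3),(3,1,2,2);
""")

# Deterministic version: say explicitly which data.id you want per group
# (MIN here), instead of letting the engine pick an arbitrary one.
rows = conn.execute("""
    SELECT changes.id, MIN(data.id)
    FROM data JOIN changes
      ON data.c = changes.c_old AND data.g = changes.g
    GROUP BY changes.id
    ORDER BY changes.id
""").fetchall()
print(rows)
```

With MIN the result is the same on every engine and every run, which is exactly what the bare-column form does not guarantee.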
qid & accept id:
(21875568, 21875758)
query:
using if count in the part of the sql statement
soup:
Using a subquery, I would have done something like this in place of the last condition:
\nmessages.from = 'Jack' AND \ntype = 'message' AND \n1 =(select count(primary_key) from messages /* 1=count : this would ensure that \n condition works only if \n 1 row is returned*/\nwhere (messages.from='Jack' AND type='message') )\n
\nSo final SQL would have been :
\nSELECT \n *\nFROM\n messages\nWHERE\n (messages.to = 'Jack' AND (type = 'message' OR type = 'reply'))\n OR (messages.from = 'Jack' AND type = 'reply')\n OR (messages.from = 'Jack' AND \n type = 'message' AND \n 1 =(select count(primary_key) from messages\n where (messages.from='Jack' AND type='message') ))\n\n ORDER BY messages.message_id DESC , messages.id DESC\n
\n
soup wrap:
Using a subquery, I would have done something like this in place of the last condition:
messages.from = 'Jack' AND
type = 'message' AND
1 =(select count(primary_key) from messages /* 1=count : this would ensure that
condition works only if
1 row is returned*/
where (messages.from='Jack' AND type='message') )
So final SQL would have been :
SELECT
*
FROM
messages
WHERE
(messages.to = 'Jack' AND (type = 'message' OR type = 'reply'))
OR (messages.from = 'Jack' AND type = 'reply')
OR (messages.from = 'Jack' AND
type = 'message' AND
1 =(select count(primary_key) from messages
where (messages.from='Jack' AND type='message') ))
ORDER BY messages.message_id DESC , messages.id DESC
qid & accept id:
(21950759, 21950834)
query:
Extracting data from two tables with same in the form of appending
soup:
Use union
\nUNION is used to combine the result from multiple SELECT statements into a single result set.\n
\n
\nselect * from jay\nUNION \nselect * from Ren\n
\nSQL Fiddle
\nOUTPUT
\n
\n
soup wrap:
Use union
UNION is used to combine the result from multiple SELECT statements into a single result set.
select * from jay
UNION
select * from Ren
SQL Fiddle
OUTPUT
qid & accept id:
(21956650, 21957167)
query:
SQL - How to list items which are below the average
soup:
Change the select list for whatever columns you want to display, but this will limit the results as you want, for a given testid (replace testXYZ with the actual test you're searching on)
\nSELECT t.Test_name, s.*, sc.*\n  FROM Tests t\n  JOIN Scores sc\n    ON t.id_Tests = sc.Tests_id_Tests\n  JOIN Students s\n    ON sc.Students_id_Students = s.id_Students\n WHERE t.id_Tests = 'testXYZ'\n   and sc.result <\n       (select avg(x.result)\n          from scores x\n         where sc.Tests_id_Tests = x.Tests_id_Tests)\n
\nNote: To run this for ALL tests, and have scores limited to those that are below the average for each test, you would just leave that one line out of the where clause and run:
\nSELECT t.Test_name, s.*, sc.*\n FROM Tests t\n JOIN Scores sc\n ON t.id_Tests = sc.Tests_id_Tests\n JOIN Students s\n ON sc.Students_id_Students = s.id_Students\n WHERE sc.result <\n (select avg(x.result)\n from scores x\n where sc.Tests_id_Tests = x.Tests_id_Tests)\n
\n
soup wrap:
Change the select list for whatever columns you want to display, but this will limit the results as you want, for a given testid (replace testXYZ with the actual test you're searching on)
SELECT t.Test_name, s.*, sc.*
FROM Tests t
JOIN Scores sc
ON t.id_Tests = sc.Tests_id_Tests
JOIN Students s
ON sc.Students_id_Students = s.id_Students
WHERE t.id_Tests = 'testXYZ'
and sc.result <
(select avg(x.result)
from scores x
where sc.Tests_id_Tests = x.Tests_id_Tests)
Note: To run this for ALL tests, and have scores limited to those that are below the average for each test, you would just leave that one line out of the where clause and run:
SELECT t.Test_name, s.*, sc.*
FROM Tests t
JOIN Scores sc
ON t.id_Tests = sc.Tests_id_Tests
JOIN Students s
ON sc.Students_id_Students = s.id_Students
WHERE sc.result <
(select avg(x.result)
from scores x
where sc.Tests_id_Tests = x.Tests_id_Tests)
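Not part of the original answer: the correlated below-average subquery can be tried out in SQLite with a tiny scores table (student names and test ids below are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE scores (Tests_id_Tests TEXT, student TEXT, result REAL);
INSERT INTO scores VALUES
  ('t1', 'Ann', 40), ('t1', 'Bob', 60), ('t1', 'Cid', 80),
  ('t2', 'Ann', 90), ('t2', 'Bob', 70);
""")

# For each row, the subquery recomputes the average of *that* row's test,
# so each score is compared against its own test's average.
rows = conn.execute("""
    SELECT sc.Tests_id_Tests, sc.student
    FROM scores sc
    WHERE sc.result < (SELECT AVG(x.result)
                       FROM scores x
                       WHERE x.Tests_id_Tests = sc.Tests_id_Tests)
    ORDER BY sc.Tests_id_Tests, sc.student
""").fetchall()
print(rows)
```

t1's average is 60, so only Ann (40) qualifies there; Bob's 60 is not strictly below it. t2's average is 80, so Bob (70) qualifies.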
qid & accept id:
(21969425, 21969491)
query:
SQL Plus - Running a query based on user input
soup:
Try:
\nSelect columnA, columnB, columnC, columnD\nfrom myTable t\nwhere t.&searchColumn in ('&searchParam')\n
\nAlso if they are going to be typing in the substitution values, you don't need to define them earlier.
\nAnd I would change "IN" to "="
\nOr if they need to type in multiple values to search on:
\nSelect columnA, columnB, columnC,columnD\nfrom myTable t\nwhere t.&searchColumn in (&searchParam)\n
\nBut they will have to have correct input, such as:
\n'string','string1'
\n2010,2011
\nIf you want them to be able to type the substitution values into the file (at the top) using DEFINE, this is what you would do:
\ndefine searchColumn = column_name_here\ndefine searchParam = search_term_here\n\nSelect columnA, columnB, columnC,columnD\nfrom myTable t\nwhere t.&searchColumn in ('&searchParam')\n
\nAgain, you might want to change IN to =
\nOn a side note, if the substitution variable is not defined, the user will be prompted to enter it. So it depends on whether you want them to be prompted to enter it each time it's run, or if you want them to be able to define the variables at the top of the script, before they run it.
\n
soup wrap:
Try:
Select columnA, columnB, columnC, columnD
from myTable t
where t.&searchColumn in ('&searchParam')
Also if they are going to be typing in the substitution values, you don't need to define them earlier.
And I would change "IN" to "="
Or if they need to type in multiple values to search on:
Select columnA, columnB, columnC,columnD
from myTable t
where t.&searchColumn in (&searchParam)
But they will have to have correct input, such as:
'string','string1'
2010,2011
If you want them to be able to type the substitution values into the file (at the top) using DEFINE, this is what you would do:
define searchColumn = column_name_here
define searchParam = search_term_here
Select columnA, columnB, columnC,columnD
from myTable t
where t.&searchColumn in ('&searchParam')
Again, you might want to change IN to =
On a side note, if the substitution variable is not defined, the user will be prompted to enter it. So it depends on whether you want them to be prompted to enter it each time it's run, or if you want them to be able to define the variables at the top of the script, before they run it.
qid & accept id:
(21977220, 21980477)
query:
Querying time series in Postgress
soup:
One problem with the way you are currently doing it is that it does not generate a \ndata point in any intervals which do not have any sample data. For example, if the \nuser wants a chart from seconds 0 - 10 in steps of 1, then your chart won't have any\npoints after 5. Maybe that doesn't matter in your use case though.
\nAnother issue, as you indicated, it would be nice to be able to use some kind of\nlinear interpolation to attribute the measurements in case the resolution of the\nrequested plots is greater than the available data.
\nTo solve the first of these, instead of selecting data purely from the sample table,\nwe can join together the data with a generated series that matches the user's\nrequest. The latter can be generated using this:
\nSELECT int4range(rstart, rstart+1) AS srange \nFROM generate_series(0,10,1) AS seq(rstart)\n
\nThe above query will generate a series of ranges, from 0 to 10 with a step size\nof 1. The output looks like this:
\n srange\n---------\n [0,1)\n [1,2)\n [2,3)\n [3,4)\n [4,5)\n [5,6)\n [6,7)\n [7,8)\n [8,9)\n [9,10)\n [10,11)\n(11 rows)\n
\nWe can join this to the data table, using the && operator (which filters on overlap).
\nThe second point can be addressed by calculating the proportion of each data row\nwhich falls into each sample window.
\nHere is the full query:
\nSELECT lower(srange) AS t,\n sum (CASE \n -- when data range is fully contained in sample range\n WHEN drange <@ srange THEN value\n -- when data range and sample range overlap, calculate the ratio of the intersection\n -- and use that to apportion the value\n ELSE CAST (value AS DOUBLE PRECISION) * (upper(drange*srange) - lower(drange*srange)) / (upper(drange)-lower(drange))\n END) AS value\nFROM (\n -- Generate the range to be plotted (the sample ranges).\n -- To change the start / end of the range, change the 1st 2 arguments\n -- of the generate_series. To change the step size change BOTH the 3rd\n -- argument and the amount added to rstart (they must be equal).\n SELECT int4range(rstart, rstart+1) AS srange FROM generate_series(0,10,1) AS seq(rstart)\n) AS s\nLEFT JOIN (\n -- Note the use of the lag window function so that for each row, we get\n -- a range from the previous timestamp up to the current timestamp\n SELECT int4range(coalesce(lag(ts) OVER (order by ts), 0), ts) AS drange, value FROM data\n) AS d ON srange && drange\nGROUP BY lower(srange)\nORDER BY lower(srange)\n
\nResult:
\n t | value\n----+------------------\n 0 | 5\n 1 | 2\n 2 | 3.33333333333333\n 3 | 3.33333333333333\n 4 | 3.33333333333333\n 5 |\n 6 |\n 7 |\n 8 |\n 9 |\n 10 |\n(11 rows)\n
\nIt is not likely any index will be used on ts in this query as it stands, and\nif the data table is large then performance is going to be dreadful.
\nThere are some things you could try to help with this. One suggestion could be\nto redesign the data table such that the first column contains the time range of\nthe data sample, rather than just the ending time, and then you could add a\nrange index. You could then remove the windowing function from the second\nsubquery, and hopefully the index can be used.
\nRead up on range types here.
\nCaveat Emptor: I have not tested this other than on the tiny data sample you supplied.\nI have used something similar to this for a somewhat similar purpose though.
\n
soup wrap:
One problem with the way you are currently doing it is that it does not generate a
data point in any intervals which do not have any sample data. For example, if the
user wants a chart from seconds 0 - 10 in steps of 1, then your chart won't have any
points after 5. Maybe that doesn't matter in your use case though.
Another issue, as you indicated, it would be nice to be able to use some kind of
linear interpolation to attribute the measurements in case the resolution of the
requested plots is greater than the available data.
To solve the first of these, instead of selecting data purely from the sample table,
we can join together the data with a generated series that matches the user's
request. The latter can be generated using this:
SELECT int4range(rstart, rstart+1) AS srange
FROM generate_series(0,10,1) AS seq(rstart)
The above query will generate a series of ranges, from 0 to 10 with a step size
of 1. The output looks like this:
srange
---------
[0,1)
[1,2)
[2,3)
[3,4)
[4,5)
[5,6)
[6,7)
[7,8)
[8,9)
[9,10)
[10,11)
(11 rows)
We can join this to the data table, using the && operator (which filters on overlap).
The second point can be addressed by calculating the proportion of each data row
which falls into each sample window.
Here is the full query:
SELECT lower(srange) AS t,
sum (CASE
-- when data range is fully contained in sample range
WHEN drange <@ srange THEN value
-- when data range and sample range overlap, calculate the ratio of the intersection
-- and use that to apportion the value
ELSE CAST (value AS DOUBLE PRECISION) * (upper(drange*srange) - lower(drange*srange)) / (upper(drange)-lower(drange))
END) AS value
FROM (
-- Generate the range to be plotted (the sample ranges).
-- To change the start / end of the range, change the 1st 2 arguments
-- of the generate_series. To change the step size change BOTH the 3rd
-- argument and the amount added to rstart (they must be equal).
SELECT int4range(rstart, rstart+1) AS srange FROM generate_series(0,10,1) AS seq(rstart)
) AS s
LEFT JOIN (
-- Note the use of the lag window function so that for each row, we get
-- a range from the previous timestamp up to the current timestamp
SELECT int4range(coalesce(lag(ts) OVER (order by ts), 0), ts) AS drange, value FROM data
) AS d ON srange && drange
GROUP BY lower(srange)
ORDER BY lower(srange)
Result:
t | value
----+------------------
0 | 5
1 | 2
2 | 3.33333333333333
3 | 3.33333333333333
4 | 3.33333333333333
5 |
6 |
7 |
8 |
9 |
10 |
(11 rows)
It is not likely any index will be used on ts in this query as it stands, and
if the data table is large then performance is going to be dreadful.
There are some things you could try to help with this. One suggestion could be
to redesign the data table such that the first column contains the time range of
the data sample, rather than just the ending time, and then you could add a
range index. You could then remove the windowing function from the second
subquery, and hopefully the index can be used.
Read up on range types here.
Caveat Emptor: I have not tested this other than on the tiny data sample you supplied.
I have used something similar to this for a somewhat similar purpose though.
qid & accept id:
(22021194, 22021360)
query:
How to set a value with the return value of a stored procedure
soup:
Create an OUTPUT parameter inside your stored procedure and use that Parameter to store the value and then use that parameter inside your Update statement. Something like this....
\nDECLARE @OutParam Datatype;\n\nEXECUTE SP1 @param1=C1, @OUT_Param = @OutParam OUTPUT --<--\n\n--Now you can use this OUTPUT parameter in your Update statement.\n\nUPDATE Table1 \nSET C2 = @OutParam\n
\nUPDATE
\nAfter reading your comments I think this is what you are trying to do pass value of C1 Column from Table Table1 to Stored Procedure and then Update the Relevant C2 Column of Table1 with the returned value of stored procedure.
\nFor this best way to do is to Create a Table Type Parameter and pass the values of C1 as a table. See here for a detailed answer about how to pass a table to a stored procedure.
\nI havent tested it But in this situation I guess you could do something like this.. I dont recomend this method if you have a large table. in that case you are better off with a table type parameter Procedure.
\n-- Get C1 Values In a Temp Table\n\nSELECT DISTINCT C1 INTO #temp\nFROM Table1\n\n-- Declare Two Varibles \n--1) Return Type of Stored Procedure\n--2) Datatype of C1\n\nDECLARE @C1_Var DataType;\nDECLARE @param1 DataType;\n\nWHILE EXISTS(SELECT * FROM #temp)\nBEGIN\n -- Select Top 1 C1 to @C1_Var\n SELECT TOP 1 @C1_Var = C1 FROM #temp\n\n --Execute Proc and returned Value in @param1\n EXECUTE SP1 @param1 = @C1_Var \n\n -- Update the table\n UPDATE Table1\n SET C2 = @param1\n WHERE C1 = @C1_Var\n\n -- Delete from Temp Table to entually exit the loop\n DELETE FROM #temp WHERE C1 = @Var \n\nEND\n
\n
soup wrap:
Create an OUTPUT parameter inside your stored procedure and use that Parameter to store the value and then use that parameter inside your Update statement. Something like this....
DECLARE @OutParam Datatype;
EXECUTE SP1 @param1=C1, @OUT_Param = @OutParam OUTPUT --<--
--Now you can use this OUTPUT parameter in your Update statement.
UPDATE Table1
SET C2 = @OutParam
UPDATE
After reading your comments, I think this is what you are trying to do: pass the value of the C1 column from table Table1 to the stored procedure, and then update the corresponding C2 column of Table1 with the value the stored procedure returns.
For this, the best way is to create a table type parameter and pass the values of C1 as a table. See here for a detailed answer about how to pass a table to a stored procedure.
I haven't tested it, but in this situation I guess you could do something like this. I don't recommend this method if you have a large table; in that case you are better off with a table type parameter procedure.
-- Get C1 Values In a Temp Table
SELECT DISTINCT C1 INTO #temp
FROM Table1
-- Declare Two Variables
--1) Return Type of Stored Procedure
--2) Datatype of C1
DECLARE @C1_Var DataType;
DECLARE @param1 DataType;
WHILE EXISTS(SELECT * FROM #temp)
BEGIN
-- Select Top 1 C1 to @C1_Var
SELECT TOP 1 @C1_Var = C1 FROM #temp
--Execute Proc and returned Value in @param1
EXECUTE SP1 @param1 = @C1_Var
-- Update the table
UPDATE Table1
SET C2 = @param1
WHERE C1 = @C1_Var
-- Delete from Temp Table to eventually exit the loop
DELETE FROM #temp WHERE C1 = @C1_Var
END
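If you want to sanity-check the shape of that loop outside SQL Server, it can be sketched in Python with SQLite; the table contents and the `sp1` stand-in below are hypothetical:

```python
import sqlite3

# Process each distinct C1, "call" the procedure, write the result back.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Table1 (C1 INTEGER, C2 INTEGER)")
con.executemany("INSERT INTO Table1 (C1) VALUES (?)", [(1,), (2,), (2,), (3,)])

def sp1(c1):
    # stand-in for the stored procedure's computation
    return c1 * 10

for (c1,) in con.execute("SELECT DISTINCT C1 FROM Table1").fetchall():
    con.execute("UPDATE Table1 SET C2 = ? WHERE C1 = ?", (sp1(c1), c1))

print(con.execute("SELECT C1, C2 FROM Table1 ORDER BY C1").fetchall())
# [(1, 10), (2, 20), (2, 20), (3, 30)]
```

As in the T-SQL version, this does one round trip per distinct C1, which is why a table type parameter is preferable for large tables.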
qid & accept id:
(22040663, 22057600)
query:
Flattening nested record in postgres
soup:
You don't need the ROW constructor there, and so you can expand the record by using (foo).*:
\nWITH RECURSIVE t AS (\n SELECT d as foo FROM some_multicolumn_table as d\nUNION ALL\n SELECT foo FROM t WHERE random() < .5\n)\nSELECT (foo).* FROM t;\n
\nAlthough this query could be simple written as:
\nWITH RECURSIVE t AS (\n SELECT d.* FROM some_multicolumn_table as d\nUNION ALL\n SELECT t.* FROM t WHERE random() < .5\n)\nSELECT * FROM t;\n
\nAnd I recommend trying to keep it as simple as possible. But I'm assuming it was just an exemplification.
\n
soup wrap:
You don't need the ROW constructor there, and so you can expand the record by using (foo).*:
WITH RECURSIVE t AS (
SELECT d as foo FROM some_multicolumn_table as d
UNION ALL
SELECT foo FROM t WHERE random() < .5
)
SELECT (foo).* FROM t;
Although this query could be simply written as:
WITH RECURSIVE t AS (
SELECT d.* FROM some_multicolumn_table as d
UNION ALL
SELECT t.* FROM t WHERE random() < .5
)
SELECT * FROM t;
And I recommend trying to keep it as simple as possible, but I'm assuming your query was just an illustrative example.
qid & accept id:
(22052942, 22053080)
query:
Replace column output in a more readable form Oracle - SQL
soup:
If you want to hard-code the translations
\nSELECT (CASE paymentType\n WHEN 'ePay' THEN 'electronic payment'\n WHEN 'cPay' THEN 'cash payment'\n WHEN 'dPay' THEN 'deposit account payment' \n WHEN 'ccPay' THEN 'credit card payment'\n ELSE paymentType\n END) payment_type,\n other_columns\n FROM payment\n
\nNormally, though, you'd create a lookup table and join to that
\nSELECT payment_type.payment_type_description,\n <>\n FROM payment pay\n JOIN payment_type ON (pay.paymentType = payment_type.paymentType)\n
\n
soup wrap:
If you want to hard-code the translations
SELECT (CASE paymentType
WHEN 'ePay' THEN 'electronic payment'
WHEN 'cPay' THEN 'cash payment'
WHEN 'dPay' THEN 'deposit account payment'
WHEN 'ccPay' THEN 'credit card payment'
ELSE paymentType
END) payment_type,
other_columns
FROM payment
Normally, though, you'd create a lookup table and join to that
SELECT payment_type.payment_type_description,
<>
FROM payment pay
JOIN payment_type ON (pay.paymentType = payment_type.paymentType)
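The trade-off between the two approaches shows up with codes missing from the lookup table. A small SQLite sketch with hypothetical rows (the CASE version passes unknown codes through, the inner join drops them):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE payment (paymentType TEXT)")
con.executemany("INSERT INTO payment VALUES (?)",
                [("ePay",), ("cPay",), ("xPay",)])  # xPay is an unknown code

# 1) Hard-coded CASE translation: unknown codes fall through unchanged
case_rows = con.execute("""
    SELECT CASE paymentType
             WHEN 'ePay' THEN 'electronic payment'
             WHEN 'cPay' THEN 'cash payment'
             ELSE paymentType
           END
    FROM payment""").fetchall()

# 2) Lookup table join: unknown codes simply disappear from the result
con.execute("CREATE TABLE payment_type "
            "(paymentType TEXT, payment_type_description TEXT)")
con.executemany("INSERT INTO payment_type VALUES (?, ?)",
                [("ePay", "electronic payment"), ("cPay", "cash payment")])
join_rows = con.execute("""
    SELECT pt.payment_type_description
    FROM payment pay
    JOIN payment_type pt ON pay.paymentType = pt.paymentType""").fetchall()

print(sorted(case_rows))
print(sorted(join_rows))
```

A LEFT JOIN to the lookup table would keep the unknown rows with a NULL description, which is often the behaviour you actually want.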
qid & accept id:
(22055558, 22065746)
query:
Stored procedure to update temp table based on Date in SQL Server
soup:
You can write a stored procedure, like you've done and pass the date to it.
\nCREATE PROCEDURE check_scoretable \n( \n @pDate DATE = NULL\n)\nas\n
\nHowever, rather than a cursor, do something like
\nSELECT tm.name,sum(tm.noMatches) as NumberMatches,sum(tm.ownGoals) as OwnGoals,\n sum(tm.otherGoals) as Othergoals,sum(tm.Points) as Points\nFROM Team tm\nJOIN Matches mc on mc.homeId=tm.id or mc.outId=tm.id\nWHERE mc.matchDate <= @pDate\n
\nThis will give you the results you are looking for.
\nCAVEAT: Your database design is not good, because of the redundant data in it. For example, you are tracking the number of matches in the team table, when you can compute the number of matches by
\nSELECT count(*) FROM matches WHERE homeId=@id or OutId=@id\n
\nSame type of operation for total goals, etc.
\nThe problem you might run into is, if for some reason, the team record is not updated, the number of matches listed in team could be different than the number of matches from totaling up the matches played.
\n
soup wrap:
You can write a stored procedure, like you've done, and pass the date to it.
CREATE PROCEDURE check_scoretable
(
@pDate DATE = NULL
)
as
However, rather than a cursor, do something like
SELECT tm.name,sum(tm.noMatches) as NumberMatches,sum(tm.ownGoals) as OwnGoals,
sum(tm.otherGoals) as Othergoals,sum(tm.Points) as Points
FROM Team tm
JOIN Matches mc on mc.homeId=tm.id or mc.outId=tm.id
WHERE mc.matchDate <= @pDate
This will give you the results you are looking for.
CAVEAT: Your database design is not good, because of the redundant data in it. For example, you are tracking the number of matches in the team table, when you can compute the number of matches by
SELECT count(*) FROM matches WHERE homeId=@id or OutId=@id
Same type of operation for total goals, etc.
The problem you might run into is, if for some reason, the team record is not updated, the number of matches listed in team could be different than the number of matches from totaling up the matches played.
qid & accept id:
(22134638, 22134687)
query:
Max count() for every group of GROUP BY
soup:
To get the option counts, you can do:
\nselect `group`, `option`, count(*) as cnt\nfrom table t\ngroup by `group`, `option`;\n
\nThere are several ways to get the option corresponding to the maximum value. I think the easiest in this case is the substring_index()/group_concat() method:
\nselect `group`,\n substring_index(group_concat(`option` order by cnt desc), ',', 1) as maxoption\nfrom (select `group`, `option`, count(*) as cnt\n from table t\n group by `group`, `option`\n ) tgo\ngroup by `group`;\n
\n
soup wrap:
To get the option counts, you can do:
select `group`, `option`, count(*) as cnt
from table t
group by `group`, `option`;
There are several ways to get the option corresponding to the maximum value. I think the easiest in this case is the substring_index()/group_concat() method:
select `group`,
substring_index(group_concat(`option` order by cnt desc), ',', 1) as maxoption
from (select `group`, `option`, count(*) as cnt
from table t
group by `group`, `option`
) tgo
group by `group`;
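The logic of the substring_index/group_concat trick (take the option with the highest count per group) can be cross-checked in plain Python with hypothetical rows:

```python
from collections import Counter

# Hypothetical rows mirroring a `group`/`option` table.
rows = [("g1", "a"), ("g1", "a"), ("g1", "b"),
        ("g2", "b"), ("g2", "c"), ("g2", "c")]

counts = Counter(rows)  # (group, option) -> cnt, like the inner query
maxoption = {}
for (grp, opt), cnt in counts.items():
    # keep the option whose count beats the current best for this group
    if cnt > counts.get((grp, maxoption.get(grp)), 0):
        maxoption[grp] = opt

print(maxoption)  # {'g1': 'a', 'g2': 'c'}
```

This is what the MySQL query does in one pass: group_concat orders the options by cnt descending, and substring_index keeps only the first one.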
qid & accept id:
(22170964, 22173791)
query:
Laravel 4 Eloquent - Similar products based on price
soup:
Making Tzook's answer more Laravel friendly.
\nIn your Variant model, add the function.
\npublic function scopeOfSimilarPrice($query, $price, $limit = 3)\n{\n return $query->orderBy(DB::raw('ABS(`price` - '.$price.')'))->take($limit);\n}\n
\nNow this functionality is more dynamic and you can use it anywhere and is much easier to use.
\nNow since we already know your product, I actually think lazy-loading is easier to read and understand.
\n// Find your product\n$product = Product::find(1);\n\n// Eager load variants with closest price\n$product->load('variants')->ofSimilarPrice($productPrice);\n\nforeach($product->variants as $variant) {\n echo $variant->details;\n echo $variant->price;\n}\n
\n
soup wrap:
Making Tzook's answer more Laravel friendly.
In your Variant model, add the function.
public function scopeOfSimilarPrice($query, $price, $limit = 3)
{
return $query->orderBy(DB::raw('ABS(`price` - '.$price.')'))->take($limit);
}
This makes the functionality more dynamic; you can reuse it anywhere and it is much easier to use.
Now since we already know your product, I actually think lazy-loading is easier to read and understand.
// Find your product
$product = Product::find(1);
// Constrain the lazy eager load so only the closest-priced variants are loaded
$product->load(['variants' => function ($query) use ($productPrice) {
    $query->ofSimilarPrice($productPrice);
}]);
foreach($product->variants as $variant) {
echo $variant->details;
echo $variant->price;
}
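The scope boils down to `ORDER BY ABS(price - ?) LIMIT n`. That ordering can be exercised directly in SQLite with hypothetical variant rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE variants (details TEXT, price REAL)")
con.executemany("INSERT INTO variants VALUES (?, ?)",
                [("small", 5.0), ("medium", 9.0),
                 ("large", 14.0), ("xl", 30.0)])

target, limit = 10.0, 3
# order by distance from the target price, keep the closest `limit` rows
rows = con.execute(
    "SELECT details FROM variants ORDER BY ABS(price - ?) LIMIT ?",
    (target, limit)).fetchall()
print(rows)  # [('medium',), ('large',), ('small',)]
```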
qid & accept id:
(22180392, 22180851)
query:
How to get results for distinct values using sql in oracle
soup:
You can get the results you seem to want using aggregation:
\nselect max(MONITOR_ALERT_INSTANCE_ID) as Id, description, max(created_date) as created_date\nfrom monitor_alert_instance \nwhere description in (select description \n from monitor_alert_instance\n where co_mod_asset_id = 1223\n )\ngroup by description;\n
\nNote that I simplified the subquery. The distinct is redundant when using group by. And neither is necessarily when using in.
\nEDIT:
\nI think you can get the same result with this query:
\nselect max(MONITOR_ALERT_INSTANCE_ID) as Id, description, max(created_date) as created_date\nfrom monitor_alert_instance \ngroup by description\nhaving max(case when co_mod_asset_id = 1223 then 1 else 0 end) = 1;\n
\nThe having clause makes sure that the description is for asset 1223.
\nWhich performs better depends on a number of factors, but this might perform better than the in version. (Or the table may be small enough that any difference in performance is negligible.)
\n
soup wrap:
You can get the results you seem to want using aggregation:
select max(MONITOR_ALERT_INSTANCE_ID) as Id, description, max(created_date) as created_date
from monitor_alert_instance
where description in (select description
from monitor_alert_instance
where co_mod_asset_id = 1223
)
group by description;
Note that I simplified the subquery. The distinct is redundant when using group by, and neither is necessary when using in.
EDIT:
I think you can get the same result with this query:
select max(MONITOR_ALERT_INSTANCE_ID) as Id, description, max(created_date) as created_date
from monitor_alert_instance
group by description
having max(case when co_mod_asset_id = 1223 then 1 else 0 end) = 1;
The having clause makes sure that the description is for asset 1223.
Which performs better depends on a number of factors, but this might perform better than the in version. (Or the table may be small enough that any difference in performance is negligible.)
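The HAVING trick runs unchanged in SQLite, so it can be tried out with a few hypothetical rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE monitor_alert_instance
               (id INTEGER, description TEXT, co_mod_asset_id INTEGER)""")
con.executemany("INSERT INTO monitor_alert_instance VALUES (?, ?, ?)",
                [(1, "disk", 1223), (2, "disk", 9), (3, "cpu", 9)])

# keep only descriptions that have at least one row for asset 1223,
# then take the max id per description
rows = con.execute("""
    SELECT max(id), description
    FROM monitor_alert_instance
    GROUP BY description
    HAVING max(CASE WHEN co_mod_asset_id = 1223 THEN 1 ELSE 0 END) = 1
""").fetchall()
print(rows)  # [(2, 'disk')]
```

Note how 'disk' qualifies via its asset-1223 row (id 1) but still reports max(id) = 2 across all its rows, exactly like the in version.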
qid & accept id:
(22184025, 22184098)
query:
using a single query to eliminate N+1 select issue
soup:
The simple way to do this in Postgres uses distinct on:
\nselect distinct on (unit_id) r.*\nfrom reports r\norder by unit_id, time desc;\n
\nThis construct is specific to Postgres and databases that use its code base. It the expression distinct on (unit_id) says "I want to keep only one row for each unit_id". The row chosen is the first row encountered with that unit_id based on the order by clause.
\nEDIT:
\nYour original query would be, assuming that id increases along with the time field:
\nSELECT r.*\nFROM reports r\nWHERE id IN (SELECT max(id)\n FROM reports\n GROUP BY unit_id\n );\n
\nYou might also try this as a not exists:
\nselect r.*\nfrom reports r\nwhere not exists (select 1\n from reports r2\n where r2.unit_id = r.unit_id and\n r2.time > r.time\n );\n
\nI thought the distinct on would perform well. This last version (and maybe the previous) would really benefit from an index on reports(unit_id, time).
\n
soup wrap:
The simple way to do this in Postgres uses distinct on:
select distinct on (unit_id) r.*
from reports r
order by unit_id, time desc;
This construct is specific to Postgres and databases that use its code base. The expression distinct on (unit_id) says "I want to keep only one row for each unit_id". The row chosen is the first row encountered with that unit_id based on the order by clause.
EDIT:
Your original query would be, assuming that id increases along with the time field:
SELECT r.*
FROM reports r
WHERE id IN (SELECT max(id)
FROM reports
GROUP BY unit_id
);
You might also try this as a not exists:
select r.*
from reports r
where not exists (select 1
from reports r2
where r2.unit_id = r.unit_id and
r2.time > r.time
);
I would expect the distinct on version to perform well. This last version (and maybe the previous one) would really benefit from an index on reports(unit_id, time).
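distinct on itself is Postgres-only, but the portable max(id) variant can be exercised in SQLite; the reports rows here are hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE reports (id INTEGER, unit_id INTEGER, value TEXT)")
con.executemany("INSERT INTO reports VALUES (?, ?, ?)",
                [(1, 10, "old"), (2, 10, "new"), (3, 20, "only")])

# latest row per unit_id, assuming id increases along with time
latest = con.execute("""
    SELECT id, unit_id, value FROM reports
    WHERE id IN (SELECT max(id) FROM reports GROUP BY unit_id)
    ORDER BY unit_id""").fetchall()
print(latest)  # [(2, 10, 'new'), (3, 20, 'only')]
```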
qid & accept id:
(22205060, 22205992)
query:
Insert based on another column's value (Oracle 11g)
soup:
Update table1 \nset Update_time = (case when value_a < 0.1 and Update_time is null then sysdate\n when value_a > 0.1 and Update_time is not null then null\n else Update_time end);\n
\nChange sysdate to your desired value.
\nEDIT:
\nInclude Edit in the merge statement. See the below query (not tested with the real data)\nIn this way we do not run the update on entire table.
\nMerge into table1 t1\nusing table1_staging t1s\non t1.name = t1s.name\nwhen matched then\nupdate t1.value_a = t1s.value_a,\nt1.Update_time = (case when t1s.value_a < 0.1 and t1.Update_time is null then sysdate\n when t1s.value_a > 0.1 and t1.Update_time is not null then null\n else t1.Update_time end)\nwhen not matched then\nINSERT (name, values_a)\n VALUES (t1s.name, t1s.values_a);\n
\n
soup wrap:
Update table1
set Update_time = (case when value_a < 0.1 and Update_time is null then sysdate
when value_a > 0.1 and Update_time is not null then null
else Update_time end);
Change sysdate to your desired value.
EDIT:
Include the edit in a merge statement. See the query below (not tested with real data).
This way we do not run the update on the entire table.
Merge into table1 t1
using table1_staging t1s
on t1.name = t1s.name
when matched then
update set t1.value_a = t1s.value_a,
t1.Update_time = (case when t1s.value_a < 0.1 and t1.Update_time is null then sysdate
when t1s.value_a > 0.1 and t1.Update_time is not null then null
else t1.Update_time end)
when not matched then
INSERT (name, value_a)
VALUES (t1s.name, t1s.value_a);
qid & accept id:
(22228967, 22229203)
query:
Showing all values in Group By with inclusion of CASE
soup:
The way I would go about this is to create your own table of values using a table value constructor:
\nSELECT OldSeverity, NewSeverity\nFROM (VALUES \n ('Critical', 'Critical'),\n ('High', 'Critical'),\n ('Medium', 'Medium'),\n ('Low', 'Medium')\n ) s (OldSeverity, NewSeverity);\n
\nThis gives a table you can select from, then left join to your existing table:
\nSELECT Severity = s.NewSeverity,\n Total = COUNT(t.Severity)\nFROM (VALUES \n ('Critical', 'Critical'),\n ('High', 'Critical'),\n ('Medium', 'Medium'),\n ('Low', 'Medium')\n ) s (OldSeverity, NewSeverity)\n LEFT JOIN #Test t\n ON t.Severity = s.OldSeverity\nGROUP BY s.NewSeverity;\n
\nThis will give the desired results.
\n\n
\nEDIT
\nThe problem you have with the way that you are implimenting the query, is that although you have immediately left joined to DimWorkItem you then inner join to subsequent tables and refer to columns in WorkItem in the where clause, which undoes your left join and turns it back into an inner join. You need to place your whole logic into a subquery, and left join to this:
\nSELECT s.NewSeverity AS 'Severity'\n ,COUNT(WI.microsoft_vsts_common_severity) AS 'Total'\nFROM ( VALUES\n ('Critical','I-High')\n ,('High','I-High')\n ,('Medium','I-Low')\n ,('Low','I-Low')\n )s(OldSeverity,NewSeverity)\n LEFT JOIN \n ( SELECT wi.Severity\n FROM DimWorkItem WI (NOLOCK) \n JOIN dbo.DimPerson P \n ON p.personsk = WI.system_assignedto__personsk \n JOIN DimTeamProject TP \n ON WI.TeamProjectSK = TP.ProjectNodeSK \n JOIN DimIteration Itr (NOLOCK) \n ON Itr.IterationSK = WI.IterationSK \n JOIN DimArea Ar (NOLOCK) \n ON Ar.AreaSK = WI.AreaSK \n WHERE TP.ProjectNodeName = 'ABC' \n AND WI.System_WorkItemType = 'Bug' \n AND WI.Microsoft_VSTS_CMMI_RootCause <> 'Change Request' \n AND Itr.IterationPath LIKE '%\ABC\R1234\Test\IT%' \n AND WI.System_State NOT IN ( 'Rejected', 'Closed' ) \n AND WI.System_RevisedDate = CONVERT(datetime, '9999', 126) \n ) WI\n ON WI.Severity = s.OldSeverity \nGROUP BY s.NewSeverity;\n
\n
soup wrap:
The way I would go about this is to create your own table of values using a table value constructor:
SELECT OldSeverity, NewSeverity
FROM (VALUES
('Critical', 'Critical'),
('High', 'Critical'),
('Medium', 'Medium'),
('Low', 'Medium')
) s (OldSeverity, NewSeverity);
This gives a table you can select from, then left join to your existing table:
SELECT Severity = s.NewSeverity,
Total = COUNT(t.Severity)
FROM (VALUES
('Critical', 'Critical'),
('High', 'Critical'),
('Medium', 'Medium'),
('Low', 'Medium')
) s (OldSeverity, NewSeverity)
LEFT JOIN #Test t
ON t.Severity = s.OldSeverity
GROUP BY s.NewSeverity;
This will give the desired results.
EDIT
The problem with the way you are implementing the query is that although you immediately left join to DimWorkItem, you then inner join to subsequent tables and refer to columns of WorkItem in the where clause, which undoes your left join and turns it back into an inner join. You need to place your whole logic into a subquery, and left join to this:
SELECT s.NewSeverity AS 'Severity'
,COUNT(WI.microsoft_vsts_common_severity) AS 'Total'
FROM ( VALUES
('Critical','I-High')
,('High','I-High')
,('Medium','I-Low')
,('Low','I-Low')
)s(OldSeverity,NewSeverity)
LEFT JOIN
( SELECT wi.Severity
FROM DimWorkItem WI (NOLOCK)
JOIN dbo.DimPerson P
ON p.personsk = WI.system_assignedto__personsk
JOIN DimTeamProject TP
ON WI.TeamProjectSK = TP.ProjectNodeSK
JOIN DimIteration Itr (NOLOCK)
ON Itr.IterationSK = WI.IterationSK
JOIN DimArea Ar (NOLOCK)
ON Ar.AreaSK = WI.AreaSK
WHERE TP.ProjectNodeName = 'ABC'
AND WI.System_WorkItemType = 'Bug'
AND WI.Microsoft_VSTS_CMMI_RootCause <> 'Change Request'
AND Itr.IterationPath LIKE '%\ABC\R1234\Test\IT%'
AND WI.System_State NOT IN ( 'Rejected', 'Closed' )
AND WI.System_RevisedDate = CONVERT(datetime, '9999', 126)
) WI
ON WI.Severity = s.OldSeverity
GROUP BY s.NewSeverity;
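The mapping-table idea can be tried in SQLite too; SQLite needs a CTE to name the VALUES columns, and the rows below are hypothetical stand-ins for #Test:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (Severity TEXT)")
con.executemany("INSERT INTO t VALUES (?)",
                [("Critical",), ("High",), ("Low",)])

# The severity mapping is the driving table; the LEFT JOIN means
# every NewSeverity appears even when no data rows match it.
rows = con.execute("""
    WITH s(OldSeverity, NewSeverity) AS (VALUES
        ('Critical', 'Critical'), ('High', 'Critical'),
        ('Medium', 'Medium'),     ('Low', 'Medium'))
    SELECT s.NewSeverity, COUNT(t.Severity)
    FROM s LEFT JOIN t ON t.Severity = s.OldSeverity
    GROUP BY s.NewSeverity
    ORDER BY s.NewSeverity""").fetchall()
print(rows)  # [('Critical', 2), ('Medium', 1)]
```

COUNT(t.Severity) rather than COUNT(*) is what makes unmatched severities report their true count instead of 1.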
qid & accept id:
(22232282, 22232897)
query:
Select rows until condition met
soup:
Use a sub-query to find out at what point you should stop, then return all row from your starting point to the calculated stop point.
\nSELECT\n *\nFROM\n yourTable\nWHERE\n id >= 4\n AND id <= (SELECT MIN(id) FROM yourTable WHERE b = 'F' AND id >= 4)\n
\nNote, this assumes that the last record is always an 'F'. You can deal with the last record being a 'T' using a COALESCE.
\nSELECT\n *\nFROM\n yourTable\nWHERE\n id >= 4\n AND id <= COALESCE(\n (SELECT MIN(id) FROM yourTable WHERE b = 'F' AND id >= 4),\n (SELECT MAX(id) FROM yourTable )\n )\n
\n
soup wrap:
Use a sub-query to find out at what point you should stop, then return all rows from your starting point to the calculated stop point.
SELECT
*
FROM
yourTable
WHERE
id >= 4
AND id <= (SELECT MIN(id) FROM yourTable WHERE b = 'F' AND id >= 4)
Note, this assumes that the last record is always an 'F'. You can deal with the last record being a 'T' using a COALESCE.
SELECT
*
FROM
yourTable
WHERE
id >= 4
AND id <= COALESCE(
(SELECT MIN(id) FROM yourTable WHERE b = 'F' AND id >= 4),
(SELECT MAX(id) FROM yourTable )
)
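The stop-point subquery runs as-is in SQLite, so it is easy to try with a hypothetical table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE yourTable (id INTEGER, b TEXT)")
con.executemany("INSERT INTO yourTable VALUES (?, ?)",
                [(3, "T"), (4, "T"), (5, "T"), (6, "F"), (7, "T"), (8, "F")])

# start at id 4, stop at (and include) the first 'F' row from there on
rows = con.execute("""
    SELECT id, b FROM yourTable
    WHERE id >= 4
      AND id <= (SELECT MIN(id) FROM yourTable WHERE b = 'F' AND id >= 4)
""").fetchall()
print(rows)  # [(4, 'T'), (5, 'T'), (6, 'F')]
```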
qid & accept id:
(22258390, 22258531)
query:
Concatenate string with real table SQL SERVER
soup:
Try this
\nselect * \n from Table1 a\n join Table2 b on a.Col1=case @nivel\n when 1 then b.Col1\n when 2 then b.Col2\n when 3 then b.Col3\n ...\n end\n
\nhowever, this is extremely bad design. You should consider redesigning your Table2 to contain something like
\n| ColNo | ColumnData\n| 1 | Data of column 1\n| 2 | Data of column 2\n| 3 | Data of column 3\n
\nthen your query will be more straightforward
\nselect * \n from Table1 a\n join Table2 b\n on a.Col1 = b.ColumnData \n and b.ColNo = @nivel\n
\n
soup wrap:
Try this
select *
from Table1 a
join Table2 b on a.Col1=case @nivel
when 1 then b.Col1
when 2 then b.Col2
when 3 then b.Col3
...
end
However, this is extremely bad design. You should consider redesigning your Table2 to contain something like
| ColNo | ColumnData
| 1 | Data of column 1
| 2 | Data of column 2
| 3 | Data of column 3
then your query will be more straightforward
select *
from Table1 a
join Table2 b
on a.Col1 = b.ColumnData
and b.ColNo = @nivel
qid & accept id:
(22311646, 22311834)
query:
How do I combine two LEFT JOINS without getting crossover?
soup:
You just need to add distinct to the counts
\nSELECT u.*, COUNT(DISTINCT q.id), COUNT(DISTINCT a.id)\n FROM users u\n LEFT JOIN questions q ON u.id = q.author_id\n LEFT JOIN answers a ON u.id = a.author_id\n GROUP BY u.id\n
\nHere's a demo of it in action using Data.SE
\nAlternatively you can use inline views in the from clause
\nSELECT u.*, q.QuestionCount, a.AnswerCount\nFROM users u \n LEFT JOIN (SELECT Count(id) QuestionCount, \n author_id \n FROM questions \n GROUP BY author_id) q \n ON u.id = q.author_id \n LEFT JOIN (SELECT Count(id) AnswerCount, \n author_id \n FROM answers \n GROUP BY author_id) a \n ON u.id = q.author_id \n
\n\n
soup wrap:
You just need to add distinct to the counts
SELECT u.*, COUNT(DISTINCT q.id), COUNT(DISTINCT a.id)
FROM users u
LEFT JOIN questions q ON u.id = q.author_id
LEFT JOIN answers a ON u.id = a.author_id
GROUP BY u.id
Here's a demo of it in action using Data.SE
Alternatively you can use inline views in the from clause
SELECT u.*, q.QuestionCount, a.AnswerCount
FROM users u
LEFT JOIN (SELECT Count(id) QuestionCount,
author_id
FROM questions
GROUP BY author_id) q
ON u.id = q.author_id
LEFT JOIN (SELECT Count(id) AnswerCount,
author_id
FROM answers
GROUP BY author_id) a
ON u.id = a.author_id
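To see why the DISTINCT matters, here is the cross-over effect reproduced in SQLite with hypothetical rows (one user, two questions, three answers):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users (id INTEGER);
    CREATE TABLE questions (id INTEGER, author_id INTEGER);
    CREATE TABLE answers (id INTEGER, author_id INTEGER);
    INSERT INTO users VALUES (1);
    INSERT INTO questions VALUES (1, 1), (2, 1);
    INSERT INTO answers VALUES (1, 1), (2, 1), (3, 1);
""")

# The two LEFT JOINs produce 2 x 3 = 6 combined rows per user;
# COUNT(DISTINCT ...) collapses that back to the true counts.
rows = con.execute("""
    SELECT u.id, COUNT(DISTINCT q.id), COUNT(DISTINCT a.id), COUNT(q.id)
    FROM users u
    LEFT JOIN questions q ON u.id = q.author_id
    LEFT JOIN answers a ON u.id = a.author_id
    GROUP BY u.id""").fetchall()
print(rows)  # [(1, 2, 3, 6)] -- plain COUNT(q.id) is inflated to 6
```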
qid & accept id:
(22348948, 22349154)
query:
SQL: How to extract data from one column as different columns, according to different condition?
soup:
If I understand correctly, you want to "pivot" the data. In SQLite, one way to do this by using group by:
\nselect AP_idx,\n max(case when RF_idx = 0 then Channel end) as ChannelA,\n max(case when RF_idx = 1 then Channel end) as ChannelB\nfrom table t\ngroup by AP_idx;\n
\nAnother way is by using join:
\nselect ta.AP_idx, ta.channel as ChannelA, tb.channel as ChannelB\nfrom table ta join\n table tb\n on ta.AP_idx = tb.AP_idx and\n ta.RF_idx = 0 and\n tb.RF_idx = 1;\n
\nThis might have better performance with the right indexes. On the other hand, the aggregation method is safer if some of the channel values are missing.
\n
soup wrap:
If I understand correctly, you want to "pivot" the data. In SQLite, one way to do this is by using group by:
select AP_idx,
max(case when RF_idx = 0 then Channel end) as ChannelA,
max(case when RF_idx = 1 then Channel end) as ChannelB
from table t
group by AP_idx;
Another way is by using join:
select ta.AP_idx, ta.channel as ChannelA, tb.channel as ChannelB
from table ta join
table tb
on ta.AP_idx = tb.AP_idx and
ta.RF_idx = 0 and
tb.RF_idx = 1;
This might have better performance with the right indexes. On the other hand, the aggregation method is safer if some of the channel values are missing.
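Since the question is about SQLite, the conditional-aggregation pivot can be run as written; the rows here are hypothetical, with one AP missing its RF_idx = 1 channel to show the safety point:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (AP_idx INTEGER, RF_idx INTEGER, Channel INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)",
                [(1, 0, 6), (1, 1, 36), (2, 0, 11)])  # AP 2 has no RF_idx=1

# CASE without ELSE yields NULL, which max() ignores, so a missing
# channel comes out as NULL instead of dropping the whole AP row.
rows = con.execute("""
    SELECT AP_idx,
           max(CASE WHEN RF_idx = 0 THEN Channel END) AS ChannelA,
           max(CASE WHEN RF_idx = 1 THEN Channel END) AS ChannelB
    FROM t GROUP BY AP_idx ORDER BY AP_idx""").fetchall()
print(rows)  # [(1, 6, 36), (2, 11, None)]
```

The join variant would drop AP 2 entirely here, which is exactly the caveat about missing channel values.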
qid & accept id:
(22366810, 22368044)
query:
Calculating a field based on totals from queries in MS Access 2010
soup:
Try using multiple queries as individual reports and as data sources.
\nSuppose your tables looks like this...
\ntblSurveys:
\nemployeeid score\n---------- -----\n1 10\n2 3\n2 2\n3 7\n\netc...\n
\ntblEmployees:
\nemployeeid EmployeeName SupervisorId \n---------- ------------- ------------ \n1 Employee 1 1 \n\netc...\n
\ntblSupervisors:
\nSuperVisorId SuperVisorName RegManagerId\n------------ -------------- -------------\n1 Super 1 1\n2 Super 2 1\n\netc...\n
\ntblRegManagers:
\nRegManagerId RegManagerName\n------------- -----------------\n1 Regional Manager 1\n2 Regional Manager 2\n\netc...\n
\nYou may be able to create multipurpose queries. See SQL below...
\nQuery1: This gives you the employee stats
\nselect SupervisorName,RegManagerId,EmployeeName,\n Promoter,Detractor,surveys,Promoter-Detractor AS score,\n (Promoter-Detractor)/surveys as result \n from \n ( \n select a.EmployeeName,b.SupervisorName, b.RegManagerId,\n (select count(*) from tblSurveys where \n employeeid=a.employeeid and score<7) as Detractor,\n (select count(*) from tblSurveys where \n employeeid=a.employeeid and score>6) as Promoter,\n (select count(*) from tblSurveys where employeeid=a.employeeid) as surveys \n from tblEmployees a left join tblSupervisors b on a.supervisorid=b.supervisorid\n ) \n
\nQuery2: This gives you the supervisor stats but also uses employee stats (Query1)
\nselect supervisorname,RegManagerId, \n promotersum, detractorsum, surveyssum,(promotersum-detractorsum)/surveyssum \n from \n (select SuperVisorName,RegManagerId, sum(Promoter) as PromoterSum, \n sum(Detractor) as DetractorSum, \n sum(surveys) as surveyssum from query1 group by SuperVisorName,RegManagerId )\n
\nQuery3: This gives you Regional Manager stats but also uses supervisor stats (Query2)
\nselect RegManagerName, promoter_cnt, detractor_cnt, survey_cnt, promoter_cnt-detractor_cnt as score, \n (promoter_cnt-detractor_cnt)/survey_cnt as result \n from \n (select a.RegManagerName, b.RegManagerId, sum(b.promotersum) as promoter_cnt, \n sum(b.detractorsum) as detractor_cnt, sum(b.surveyssum) as survey_cnt \n from tblRegManagers a left join query2 b on a.RegManagerId=b.RegManagerId \n group by a.RegManagerName, b.RegManagerId) \n
\nSo, while each query serves as a report by themselves, the first two are used as source queries.
\n
soup wrap:
Try using multiple queries as individual reports and as data sources.
Suppose your tables look like this...
tblSurveys:
employeeid score
---------- -----
1 10
2 3
2 2
3 7
etc...
tblEmployees:
employeeid EmployeeName SupervisorId
---------- ------------- ------------
1 Employee 1 1
etc...
tblSupervisors:
SuperVisorId SuperVisorName RegManagerId
------------ -------------- -------------
1 Super 1 1
2 Super 2 1
etc...
tblRegManagers:
RegManagerId RegManagerName
------------- -----------------
1 Regional Manager 1
2 Regional Manager 2
etc...
You may be able to create multipurpose queries. See SQL below...
Query1: This gives you the employee stats
select SupervisorName,RegManagerId,EmployeeName,
Promoter,Detractor,surveys,Promoter-Detractor AS score,
(Promoter-Detractor)/surveys as result
from
(
select a.EmployeeName,b.SupervisorName, b.RegManagerId,
(select count(*) from tblSurveys where
employeeid=a.employeeid and score<7) as Detractor,
(select count(*) from tblSurveys where
employeeid=a.employeeid and score>6) as Promoter,
(select count(*) from tblSurveys where employeeid=a.employeeid) as surveys
from tblEmployees a left join tblSupervisors b on a.supervisorid=b.supervisorid
)
Query2: This gives you the supervisor stats but also uses employee stats (Query1)
select supervisorname,RegManagerId,
promotersum, detractorsum, surveyssum,(promotersum-detractorsum)/surveyssum
from
(select SuperVisorName,RegManagerId, sum(Promoter) as PromoterSum,
sum(Detractor) as DetractorSum,
sum(surveys) as surveyssum from query1 group by SuperVisorName,RegManagerId )
Query3: This gives you Regional Manager stats but also uses supervisor stats (Query2)
select RegManagerName, promoter_cnt, detractor_cnt, survey_cnt, promoter_cnt-detractor_cnt as score,
(promoter_cnt-detractor_cnt)/survey_cnt as result
from
(select a.RegManagerName, b.RegManagerId, sum(b.promotersum) as promoter_cnt,
sum(b.detractorsum) as detractor_cnt, sum(b.surveyssum) as survey_cnt
from tblRegManagers a left join query2 b on a.RegManagerId=b.RegManagerId
group by a.RegManagerName, b.RegManagerId)
So, while each query serves as a report by itself, the first two also serve as source queries.
qid & accept id:
(22390896, 22391177)
query:
mysql query for give array in the date not into interval
soup wrap:
If I understand this problem correctly, what you want is
that neither the start date nor the end date should fall within the interval between s_adate and s_ddate.
Try this:
select * from table where ($datestart NOT BETWEEN s_adate and s_ddate) OR($enddate NOT BETWEEN s_adate and s_ddate);
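The effect of that filter can be checked in plain Python. This sketch uses invented rows and ISO date strings (which compare correctly as text); a row is returned whenever at least one of the two dates falls outside its interval:

```python
# Minimal sketch of the NOT BETWEEN ... OR NOT BETWEEN filter above,
# with hypothetical (s_adate, s_ddate) rows.
rows = [("2014-03-01", "2014-03-10"),
        ("2014-03-10", "2014-03-16")]

datestart, enddate = "2014-03-12", "2014-03-14"

def not_between(x, lo, hi):
    return not (lo <= x <= hi)

# keep rows where datestart or enddate lies outside the row's interval
matches = [r for r in rows
           if not_between(datestart, r[0], r[1])
           or not_between(enddate, r[0], r[1])]
```

The second row is excluded because both dates fall inside its interval; the first is kept because `datestart` falls outside it.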
qid & accept id:
(22393469, 22393561)
query:
Insert into a colum the month/day/currentyear that is the same month/day as a previous column
soup wrap:
You can use the dateadd and getdate functions to generate the dates you want. Try something like this to test it:
declare @d1 date
set @d1 = '02/01/2007'
select
@d1 as d1,
dateadd(YEAR, year(getdate())-year(@d1), @d1) as d2,
dateadd(day, 59, dateadd(YEAR, year(getdate())-year(@d1), @d1)) as d3
This would return:
d1 d2 d3
---------- ---------- ----------
2007-02-01 2014-02-01 2014-04-01
You might have to fine-tune the parameters to dateadd to get exactly what you want.
To adapt it to an update statement you would do something like:
update myTable
set date2 = dateadd(YEAR, year(getdate())-year(date1), date1) ,
date3 = dateadd(day, 59, dateadd(YEAR, year(getdate())-year(date1), date1))
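The same year-shift can be checked in Python: replace the year of the stored date with the current year, then add the day offset. The target year is pinned to 2014 here so the result is deterministic and matches the sample output above:

```python
from datetime import date, timedelta

# Sketch of dateadd(YEAR, year(getdate()) - year(d1), d1): move a stored
# date into a target year, then offset by 59 days. Note: replace(year=...)
# raises ValueError for Feb 29 in a non-leap target year.
def shift_to_year(d, target_year):
    return d.replace(year=target_year)

d1 = date(2007, 2, 1)
d2 = shift_to_year(d1, 2014)       # same month/day, current year
d3 = d2 + timedelta(days=59)       # dateadd(day, 59, ...)
```
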
qid & accept id:
(22399836, 22399908)
query:
SQL Query, latest rows for each unique duo
soup wrap:
SELECT the MAXimum of modification_date for each GROUP of (A, B), then JOIN back to the original row to get the values (necessary to get the id column):
SELECT t1.*
FROM Person t1
JOIN
(
SELECT MAX(modification_date) max_date, A, B
FROM Person
GROUP BY A, B
) t2 ON t1.A = t2.A AND t1.B = t2.B AND t1.modification_date = t2.max_date
More simply, if you don't care which id you get back, and you only want one row even if modification_date is duplicated, you can just select the MINimum value of id and be done with it:
SELECT MIN(id) id, A, B, MAX(modification_date) modification_date
FROM Person
GROUP BY A, B
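The join-back pattern above runs unchanged on SQLite, so it can be tried with an in-memory database. The rows below are invented; the `Person` schema is taken from the answer:

```python
import sqlite3

# Greatest-row-per-group via MAX + join-back, on an in-memory SQLite table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Person (id INTEGER, A TEXT, B TEXT, modification_date TEXT)")
con.executemany("INSERT INTO Person VALUES (?,?,?,?)", [
    (1, "x", "y", "2014-01-01"),
    (2, "x", "y", "2014-02-01"),   # latest for (x, y)
    (3, "x", "z", "2014-01-15"),   # only row for (x, z)
])
latest = con.execute("""
    SELECT t1.id, t1.A, t1.B, t1.modification_date
    FROM Person t1
    JOIN (SELECT MAX(modification_date) max_date, A, B
          FROM Person GROUP BY A, B) t2
      ON t1.A = t2.A AND t1.B = t2.B AND t1.modification_date = t2.max_date
    ORDER BY t1.id
""").fetchall()
```
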
qid & accept id:
(22452123, 22454653)
query:
How to conditionally adjust date on subsequent rows
soup wrap:
The following query should return what you want:
WITH T1 AS
(
SELECT *
, ROW_NUMBER() OVER (PARTITION BY propertyid, isprimary ORDER BY date) AS PropNo
, COUNT(*) OVER (PARTITION BY propertyid, isprimary) AS PropCount
FROM
-- Replace below with your source data table
(VALUES(1,'Bathroom condition',1,'2014-04-01')
,(1,'External wall finish',0,'2014-04-01')
,(1,'Chimney stacks',0,'2015-04-01')
,(1,'Principal roof covering',0,'2016-04-01')
,(2,'Damp proof course',0,'2016-04-01')) T(propertyid, text, isprimary, date)
)
SELECT
T1.propertyid
, T1.text
, T1.isprimary
, CASE
WHEN T1.isprimary = 1 OR T1.PropNo = T1.PropCount - 1 THEN T1.date
ELSE ISNULL(T1Next.date, T1.date) END AS [date]
FROM T1
LEFT JOIN T1 AS T1Next ON T1.propertyid = T1Next.propertyid
AND T1.isprimary = T1Next.isprimary
AND T1.PropNo = T1Next.PropNo - 1
WHERE T1.isprimary = 1
OR (T1.PropNo < T1.PropCount)
I use the ROW_NUMBER() and COUNT(*) function to determine when there are subsequent rows. To apply the date from the subsequent row, I use a LEFT JOIN.
EDIT
Changing the left join to this ensures that the join only occurs on secondary elements and only every second element:
LEFT JOIN T1 AS T1Next ON T1.propertyid = T1Next.propertyid
AND T1.isprimary = 0
AND T1Next.isprimary = 0
AND T1.PropNo = T1Next.PropNo - 1
AND T1Next.PropNo % 2 = 0
That means we don't need the case statement, just this:
ISNULL(T1Next.date, T1.date) AS [date]
But the where statement is not quite right. This works:
WHERE T1.isprimary = 1
OR (T1.PropNo % 2 = 0) --every 2nd one
OR T1Next.date IS NOT NULL --and the 1st if there is a 2nd
qid & accept id:
(22468717, 22468815)
query:
How to update duplicated rows with a index (Mysql)
soup wrap:
Try this:
update city cross join
(select @city := '', @prevcity := '', @i := 0) const
set `index` = (case when (@prevcity := @city) is null then null
when (@city := city) is null then null
else @i := if(@prevcity = city, @i + 1, 1)
end)
order by city;
If you are familiar with the use of variables for enumeration in a select statement, then this is similar. The complication is ensuring the order of evaluation for the update. This is handled by using a case statement, which sequentially evaluates each clause until one is true. The first two are guaranteed to be false (because the values should never be NULL).
EDIT:
If you have a unique id, then the solution is a bit easier. I wish you could do this:
update city c
set `index` = (select count(*) from city c2 where c2.city = c.city and c2.id <= c.id);
But instead, you can do it with more joins:
update city c join
(select id, (select count(*) from city c2 where c2.city = c1.city and c2.id <= c1.id) as newind
from city c1
) ci
on c.id = ci.id
set c.`index` = ci.newind;
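The correlated-count idea (number each duplicate by counting rows with the same city and a smaller-or-equal id) also works in SQLite, so it can be tried with an in-memory copy of a hypothetical `city` table:

```python
import sqlite3

# Enumerate duplicates with a correlated COUNT(*), as in the answer above.
con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE city (id INTEGER, city TEXT, "index" INTEGER)')
con.executemany("INSERT INTO city (id, city) VALUES (?,?)",
                [(1, "Paris"), (2, "Lyon"), (3, "Paris")])

# SQLite allows the correlated subquery directly in the UPDATE,
# which is the form the answer wishes MySQL supported.
con.execute("""
    UPDATE city SET "index" =
        (SELECT COUNT(*) FROM city c2
         WHERE c2.city = city.city AND c2.id <= city.id)
""")
result = con.execute('SELECT id, "index" FROM city ORDER BY id').fetchall()
```
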
qid & accept id:
(22512709, 22513571)
query:
In SQL how can I add my Row_Number() to my current subquery in the from clause?
soup wrap:
You should be able to do exactly the same thing (although I cannot imagine what you are trying to accomplish):
Select *
From (
SELECT DISTINCT ROW_NUMBER() Over(Order By c.UserId) rn, c.UserId, (u.FirstName + ' ' + u.LastName) AS [UserName], Count(c.UserId +c.CaseId+c.LineNumber) AS [CompletedCase]
FROM T.dbo.CompletedCase c join T.dbo.User u on c.UserId = u.UserID
WHERE c.PrintDateTime >= '2014-01-27 7:00' AND c.PrintDateTime <= '2014-01-27 17:00'
Group By u.FirstName, u.LastName, c.UserId
) x
Where x.rn Between 0 and 25
Order By [UserName]
Personally, I like doing this kind of thing with CTE's:
;with cte as
(
SELECT DISTINCT ROW_NUMBER() Over(Order By c.UserId) rn
,c.UserId
,(u.FirstName + ' ' + u.LastName) AS [UserName]
,Count(c.UserId +c.CaseId+c.LineNumber) AS [CompletedCase]
FROM T.dbo.CompletedCase c
join T.dbo.User u
on c.UserId = u.UserID
WHERE c.PrintDateTime >= '2014-01-27 7:00' AND c.PrintDateTime <= '2014-01-27 17:00'
Group By u.FirstName, u.LastName, c.UserId
)
Select UserId
,UserName
,CompletedCase
From cte
Where rn Between 0 And 25
Order By [UserName]
But, it kind of seems like you just want the first 25 rows, so why not just:
SELECT DISTINCT TOP 25 c.UserId, (u.FirstName + ' ' + u.LastName) AS [UserName], Count(c.UserId +c.CaseId+c.LineNumber) AS [CompletedCase]
FROM T.dbo.CompletedCase c join T.dbo.User u on c.UserId = u.UserID
WHERE c.PrintDateTime >= '2014-01-27 7:00' AND c.PrintDateTime <= '2014-01-27 17:00'
Group By u.FirstName, u.LastName, c.UserId
Order By [UserName]
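The essence of the `rn Between 0 And 25` filter is: number the grouped rows, then keep the first page. A plain-Python sketch with invented per-user counts:

```python
# Number rows after grouping, then slice out the first "page",
# mirroring ROW_NUMBER() + WHERE rn BETWEEN 1 AND page_size.
completed = {"alice": 7, "bob": 3, "carol": 5}  # user -> completed cases

page_size = 2
numbered = [(rn, user, n)
            for rn, (user, n) in enumerate(sorted(completed.items()), start=1)]
first_page = [row for row in numbered if 1 <= row[0] <= page_size]
```
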
qid & accept id:
(22537662, 22537922)
query:
transfer the value of a field to variable in SQL Server 2012
soup wrap:
I suspect that you really want the lag() function:
select t.*,
lag(code) over (order by date) as lastcode
from table t;
Note that this would be NULL for the first row, because it has no predecessor. In SQL Server you can use isnull() or coalesce() to assign a default value.
In SQL Server, you can use this in an update statement:
with toupdate as (
select t.*,
lag(code) over (order by date) as new_lastcode
from table t
)
update toupdate
set lastcode = new_lastcode;
This assumes the column already exists in the table.
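What `lag(code) over (order by date)` computes can be illustrated directly: sort by date, then pair each row with the previous row's code, `None` (NULL) for the first. The rows below are hypothetical:

```python
# lag() in plain Python: each row sees the previous row's code.
rows = [("2014-01-01", "A"), ("2014-01-02", "B"), ("2014-01-03", "C")]
rows.sort()  # ORDER BY date

lastcode = [None] + [code for _, code in rows[:-1]]
paired = [(d, c, prev) for (d, c), prev in zip(rows, lastcode)]
```
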
qid & accept id:
(22544486, 22544574)
query:
How to insert multiple rows with one insert statement
soup wrap:
Try this:
INSERT INTO tblUsers (State,City,Code)
SELECT 'IN','Indy', UserCode
FROM tblAccounts
WHERE UserCode IN
(SELECT UserCode
FROM tblAccounts
WHERE State = 'IN')
or better simplified (a subquery is not needed):
INSERT INTO tblUsers (State,City,Code)
SELECT 'IN','Indy', UserCode
FROM tblAccounts
WHERE State = 'IN'
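The simplified INSERT ... SELECT runs as-is on SQLite, so it can be verified with an in-memory database and invented account rows:

```python
import sqlite3

# Multi-row insert driven by a SELECT, as in the simplified form above.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tblAccounts (UserCode TEXT, State TEXT)")
con.execute("CREATE TABLE tblUsers (State TEXT, City TEXT, Code TEXT)")
con.executemany("INSERT INTO tblAccounts VALUES (?,?)",
                [("u1", "IN"), ("u2", "OH"), ("u3", "IN")])

con.execute("""
    INSERT INTO tblUsers (State, City, Code)
    SELECT 'IN', 'Indy', UserCode FROM tblAccounts WHERE State = 'IN'
""")
inserted = con.execute("SELECT Code FROM tblUsers ORDER BY Code").fetchall()
```

One row is inserted per matching account, with the literals repeated for each.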
qid & accept id:
(22583760, 22584415)
query:
Select Every Date for Date Range and Insert
soup wrap:
I think this should do it (DEMO):
;with cte as (
select
id
,startdate
,enddate
,value / (1+datediff(day, startdate, enddate)) as value
,startdate as date
from units
union all
select id, startdate, enddate, value, date+1 as date
from cte
where date < enddate
)
select
row_number() over (order by date) as ID
,date
,sum(value) as value
from cte
group by date
The idea is to use a Recursive CTE to explode the date ranges into one record per day. Also, the logic of value / (1+datediff(day, startdate, enddate)) distributes the total value evenly over the number of days in each range. Finally, we group by day and sum together all the values corresponding to that day to get the output:
| ID | DATE | VALUE |
|----|---------------------------------|-------|
| 1 | January, 01 2014 00:00:00+0000 | 11 |
| 2 | January, 02 2014 00:00:00+0000 | 16 |
| 3 | January, 03 2014 00:00:00+0000 | 16 |
| 4 | February, 01 2014 00:00:00+0000 | 10 |
| 5 | February, 02 2014 00:00:00+0000 | 10 |
From here you can join with your result table (Table B) by date, and update/insert the value as needed. That logic might look something like this (test it first of course before running in production!):
update B set B.VALUE = R.VALUE from TableB B join Result R on B.DATE = R.DATE
insert TableB (DATE, VALUE)
select DATE, VALUE from Result R where R.DATE not in (select DATE from TableB)
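The recursive CTE can be restated iteratively: split each (start, end, value) range into one row per day, dividing the value by the day count (integer division, as with SQL integers), then sum per day. The unit rows below reproduce the first three days of the sample output:

```python
from datetime import date, timedelta

# Explode date ranges into per-day rows and distribute value evenly,
# mirroring the recursive CTE above. Data is invented.
units = [(date(2014, 1, 1), date(2014, 1, 3), 33),
         (date(2014, 1, 2), date(2014, 1, 3), 10)]

per_day = {}
for start, end, value in units:
    days = (end - start).days + 1          # 1 + datediff(day, start, end)
    for i in range(days):
        d = start + timedelta(days=i)
        per_day[d] = per_day.get(d, 0) + value // days
```
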
qid & accept id:
(22622841, 22623091)
query:
SQL Server : Calculate a percentual value in a row out of a sum of multiple rows
soup wrap:
Test Data
DECLARE @TABLE TABLE (id INT,name VARCHAR(100),value INT)
INSERT INTO @TABLE VALUES
(1,'kermit',100),(2,'piggy',200),(3,'tiffy',300)
Query
;WITH CTE1
AS
(
SELECT SUM(value) AS Total
FROM @TABLE
),
CTE2
AS
(
SELECT *
, CAST(CAST((CAST(Value AS NUMERIC(10,2)) /
(SELECT CAST(Total AS NUMERIC(10,2)) FROM CTE1)) * 100.00
AS NUMERIC(4,2)) AS NVARCHAR(10)) + '%' AS [% of sum of matching rows]
FROM @TABLE
)
SELECT *
FROM CTE2
Result Set
╔════╦════════╦═══════╦═══════════════════════════╗
║ id ║ name ║ value ║ % of sum of matching rows ║
╠════╬════════╬═══════╬═══════════════════════════╣
║ 1 ║ kermit ║ 100 ║ 16.67% ║
║ 2 ║ piggy ║ 200 ║ 33.33% ║
║ 3 ║ tiffy ║ 300 ║ 50.00% ║
╚════╩════════╩═══════╩═══════════════════════════╝
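The percent-of-total calculation reduces to value / total, formatted to two decimals like the NUMERIC(4,2) cast in the query. Checked in plain Python with the same sample values:

```python
# Each row's share of the grand total, formatted as in the query above.
rows = [("kermit", 100), ("piggy", 200), ("tiffy", 300)]
total = sum(v for _, v in rows)

pct = {name: f"{value / total * 100:.2f}%" for name, value in rows}
```
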
qid & accept id:
(22629022, 22629198)
query:
Inserting a row at the specific place in SQLite database
soup wrap:
You shouldn't care about key values, just append your row at the end.
If you really need to do so, you could update the keys with something like this, assuming you want to insert the new row at key 87.
Make room for the key
update mytable
set key = key + 1
where key >= 87
Insert your row
insert into mytable ...
And finally update the key for the new row
update mytable
set key = 87
where key = NEW_ROW_KEY
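The three-step shuffle (make room, insert, renumber) can be tried on an in-memory SQLite table. The key values are invented, and 999 stands in for NEW_ROW_KEY, a temporary key no existing row uses:

```python
import sqlite3

# Make room at key 87, insert, then renumber the new row.
con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE mytable ("key" INTEGER, data TEXT)')
con.executemany("INSERT INTO mytable VALUES (?,?)",
                [(86, "a"), (87, "b"), (88, "c")])

con.execute('UPDATE mytable SET "key" = "key" + 1 WHERE "key" >= 87')  # make room
con.execute("INSERT INTO mytable VALUES (999, 'new')")                 # placeholder key
con.execute('UPDATE mytable SET "key" = 87 WHERE "key" = 999')         # final key

keys = [k for (k,) in con.execute('SELECT "key" FROM mytable ORDER BY "key"')]
```
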
qid & accept id:
(22655631, 22655782)
query:
Normalize comma separated foreign key
soup wrap:
The general idea is to split the comma separated field into a set using regexp_split_to_table, cast each value to integer, and pair the results up with the element_id from the tuple we got the original comma separated field from.
For PostgreSQL 9.3, you'd write:
INSERT INTO element_authors(element_id, author_id)
SELECT
element_id,
CAST (author_id AS integer) AS author_id
FROM
element,
LATERAL regexp_split_to_table(nullif(authors, ''), ',') author_id;
or on older PostgreSQL versions I think in this case it's safe to write:
INSERT INTO element_authors(element_id, author_id)
SELECT
element_id,
CAST( regexp_split_to_table(nullif(authors, ''), ',') AS integer) AS author_id
FROM
element;
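What regexp_split_to_table does here is a split-and-flatten: each element's comma-separated authors column explodes into (element_id, author_id) pairs, with empty strings skipped as nullif(authors, '') does. A plain-Python sketch with invented rows:

```python
# Normalize a comma-separated foreign-key column into pairs,
# mirroring the INSERT ... SELECT above.
elements = [(1, "10,20"), (2, ""), (3, "30")]  # (element_id, authors)

element_authors = [
    (eid, int(a))                 # CAST(author_id AS integer)
    for eid, authors in elements
    if authors                    # nullif(authors, '') drops empty strings
    for a in authors.split(",")
]
```
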
qid & accept id:
(22668248, 22706174)
query:
Using array of Records in 'IN' operator in Oracle
soup wrap:
Leveraging Oracle Collections to Build Array-typed Solutions
The answer to your question is YES, dimensioned variables such as ARRAYS and COLLECTIONS are viable data types in solving problems where there are multiple values in either or both the input and output values.
Additional good news is that the discussion for a simple example (such as the one in the OP) is pretty much the same as for a complex one. Solutions built with arrays are nicely scalable and dynamic if designed with a little advanced planning.
Some Up Front Design Decisions
There are actual collection types called ARRAYS and ASSOCIATIVE ARRAYS. I chose to use NESTED TABLE TYPES because of their accessibility to direct SQL queries. In some ways, they exhibit "array-like" behavior. There are other trade-offs which can be researched through Oracle references.
The query applied to search the COURSE TABLE would apply a JOIN condition instead of an IN-LIST approach.
The use of a STORED PROCEDURE typed object improves database response. Queries within the procedure call can leverage and reuse already compiled code plus their cached execution plans.
Choosing the Right Collection or Array Type
There are a lot of choices of collection types in Oracle for storing variables into memory. Each has an advantage and some sort of limitation. AskTom from Oracle has a good example and break-down of what a developer can expect by choosing one variable collection type over another.
Using NESTED TABLE Types for Managing Multiple Valued Variables
For this solution, I chose to work with NESTED TABLES because of their ability to be accessed directly through SQL commands. After trying several different approaches, I noticed that the plain-SQL accessibility leads to more clarity in the resulting code.
The down-side is that you will notice that there is a little overhead here and there with respect to declaring an instance of a nested table type, initializing each instance, and managing its size with the addition of new values.
In any case, if you anticipate an unknown number of input or output values, an array-typed data type (collection) of any sort is a more flexible structure for your code. It is likely to require less maintenance in the end.
The Example: A Stored Procedure Search Query
Custom TYPE Definitions
CREATE OR REPLACE TYPE "COURSE_REC_TYPE" IS OBJECT (DEPID NUMBER(10,0), COURSE VARCHAR2(10));
CREATE OR REPLACE TYPE "COURSE_TBL_TYPE" IS TABLE of course_rec_type;
PROCEDURE Source Code
create or replace PROCEDURE ZZ_PROC_COURSE_SEARCH IS
my_input course_tbl_type:= course_tbl_type();
my_output course_tbl_type:= course_tbl_type();
cur_loop_counter pls_integer;
c_output_template constant varchar2(100):=
'DEPID: <>, COURSE: <>';
v_output VARCHAR2(200);
CURSOR find_course_cur IS
SELECT crs.depid, crs.course
FROM zz_course crs,
(SELECT depid, course
FROM TABLE (CAST (my_input AS course_tbl_type))
) search_values
WHERE crs.depid = search_values.depid
AND crs.course = search_values.course;
BEGIN
my_input.extend(2);
my_input(1):= course_rec_type(1, 'A');
my_input(2):= course_rec_type(4, 'D');
cur_loop_counter:= 0;
for i in find_course_cur
loop
cur_loop_counter:= cur_loop_counter + 1;
my_output.extend;
my_output(cur_loop_counter):= course_rec_type(i.depid, i.course);
end loop;
for j in my_output.first .. my_output.last
loop
v_output:= replace(c_output_template, '<>', to_char(my_output(j).depid));
v_output:= replace(v_output, '<>', my_output(j).course);
dbms_output.put_line(v_output);
end loop;
end ZZ_PROC_COURSE_SEARCH;
Procedure OUTPUT:
DEPID: 1, COURSE: A
DEPID: 4, COURSE: D
Statement processed.
0.03 seconds
MY COMMENTS: I wasn't particularly satisfied with the way the input variables were stored. There was a clumsy kind of problem with "loading" values into the nested table structure... If you can consider using a single search key instead of a composite pair (i.e., depid and course), the problem condenses to a simpler form.
Revised Cursor Using a Single Search Value
This is the proposed modification to the table design of the OP. Add a single unique key id column (RecId) to represent each unique combination of DepId and Course.

Note that the RecId column represents a SURROGATE KEY which should have no internal meaning aside from its property as a uniquely assigned value.
Custom TYPE Definitions
CREATE OR REPLACE TYPE "NUM_TBL_TYPE" IS TABLE of INTEGER;
Remove Array Variable
This will be passed directly through an input parameter from the procedure call.
-- REMOVE
my_input course_tbl_type:= course_tbl_type();
Loading and Presenting INPUT Parameter Array (Nested Table)
The following can be removed from the main procedure and presented as part of the call to the procedure.
BEGIN
my_input.extend(2);
my_input(1):= course_rec_type(1, 'A');
my_input(2):= course_rec_type(4, 'D');
Becomes:
create or replace PROCEDURE ZZ_PROC_COURSE_SEARCH (p_search_ids IN num_tbl_type) IS...
and
my_external_input:= num_tbl_type(1, 4);  -- the constructor sizes and fills the collection; no EXTEND needed
Changing the Internal Cursor Definition
The cursor looks about the same. You can just as easily use an IN-LIST now that there is only one search parameter.
CURSOR find_course_cur IS
SELECT crs.depid, crs.course
FROM zz_course_new crs,
(SELECT column_value as recid
FROM TABLE (CAST (p_search_ids AS num_tbl_type))
) search_values
WHERE crs.recid = search_values.recid;
The Actual SEARCH Call and Output
The searching portion of this operation is now isolated and dynamic. It does not need to be changed. All the changes happen in the calling PL/SQL block, where the search ID values are much easier to read and change.
DECLARE
my_input_external num_tbl_type:= num_tbl_type();
BEGIN
my_input_external:= num_tbl_type(1, 3, 22);  -- again, the constructor makes EXTEND unnecessary
ZZ_PROC_COURSE_SEARCH (p_search_ids => my_input_external);
END;
-- The OUTPUT (Currently set to DBMS_OUT)
DEPID: 1, COURSE: A
DEPID: 4, COURSE: D
DEPID: 7, COURSE: G
Statement processed.
0.01 seconds
qid & accept id:
(22697790, 22697920)
query:
Get the difference returned from two queries as the return of one query
soup wrap:
You can just subtract the two values:
SELECT (SELECT COUNT(ID)
FROM Used
WHERE ID = 54
AND QTY = 1.875
AND DateReceived = '2014-03-27 00:00:00'
AND VendorID = 12400
AND WithDrawn = 0) -
(SELECT COUNT(ID)
FROM Used
WHERE ID = 54
AND QTY = 1.875
AND DateReceived = '2014-03-27 00:00:00'
AND VendorID = 12400
AND WithDrawn = 1);
Alternatively, construct a value of +1 or -1 for each record, and take the sum over that:
SELECT SUM(CASE WithDrawn WHEN 0 THEN 1 ELSE -1 END)
FROM Used
WHERE ID = 54
AND QTY = 1.875
AND DateReceived = '2014-03-27 00:00:00'
AND VendorID = 12400;
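A quick way to convince yourself the two forms agree is a small sketch (sqlite3 stands in for the OP's database; the table is trimmed to the two relevant columns and the sample rows are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Used (ID INT, WithDrawn INT)")
# three matching rows still in stock (WithDrawn = 0), one withdrawn (WithDrawn = 1)
con.executemany("INSERT INTO Used VALUES (?, ?)",
                [(54, 0), (54, 0), (54, 0), (54, 1)])

# form 1: difference of two counts
diff = con.execute("""
    SELECT (SELECT COUNT(ID) FROM Used WHERE ID = 54 AND WithDrawn = 0)
         - (SELECT COUNT(ID) FROM Used WHERE ID = 54 AND WithDrawn = 1)
""").fetchone()[0]

# form 2: single pass, +1 for WithDrawn = 0 and -1 for WithDrawn = 1
signed = con.execute("""
    SELECT SUM(CASE WithDrawn WHEN 0 THEN 1 ELSE -1 END)
    FROM Used WHERE ID = 54
""").fetchone()[0]

print(diff, signed)  # both are 2
```

The second form scans the table once instead of twice, which is the main reason to prefer it.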
qid & accept id:
(22724852, 22725833)
query:
Oracle combining two monthly sums from to different tables
soup wrap:
You can union the results together and then sum them. Keep in mind that, based on the OP, you are combining data across years. If that is not the intent, I have also provided an alternative grouped by year and month.
Grouped by month:
SELECT c1.monthNum
, sum(c1.cost) as cost
FROM
(
SELECT to_char(t1.date1, 'MM') as monthNum, SUM(t1.cost1) as cost
FROM table1 t1
WHERE ..your table1 where clause here...
GROUP BY to_char(t1.date1, 'MM')
UNION ALL
SELECT to_char(t2.date1, 'MM') as monthNum, SUM(t2.cost1) as cost
FROM table2 t2
WHERE ..your table2 where clause here...
GROUP BY to_char(t2.date1, 'MM')
) c1
GROUP BY c1.monthNum
OR Grouped by year:
SELECT c1.yearNum
, c1.monthNum
, sum(c1.cost) as cost
FROM
(
SELECT to_char(t1.date1, 'YYYY') AS yearNum, to_char(t1.date1, 'MM') as monthNum, SUM(t1.cost1) as cost
FROM table1 t1
WHERE ..your table1 where clause here...
GROUP BY to_char(t1.date1, 'YYYY'), to_char(t1.date1, 'MM')
UNION ALL
SELECT to_char(t2.date1, 'YYYY') AS yearNum, to_char(t2.date1, 'MM') as monthNum, SUM(t2.cost1) as cost
FROM table2 t2
WHERE ..your table2 where clause here...
GROUP BY to_char(t2.date1, 'YYYY'), to_char(t2.date1, 'MM')
) c1
GROUP BY c1.yearNum, c1.monthNum
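The union-then-re-aggregate pattern can be sketched end to end (sqlite3 as a stand-in; `strftime('%m', ...)` plays the role of Oracle's `to_char(date1, 'MM')`, and the table/column names and rows are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (date1 TEXT, cost1 REAL)")
con.execute("CREATE TABLE table2 (date1 TEXT, cost1 REAL)")
con.executemany("INSERT INTO table1 VALUES (?, ?)",
                [("2014-01-10", 100), ("2014-01-20", 50), ("2014-02-05", 30)])
con.executemany("INSERT INTO table2 VALUES (?, ?)",
                [("2014-01-15", 25), ("2014-02-07", 70)])

# each branch aggregates its own table, UNION ALL stacks the partial sums,
# and the outer GROUP BY adds the two partial sums per month together
rows = con.execute("""
    SELECT c1.monthNum, SUM(c1.cost) AS cost
    FROM (
        SELECT strftime('%m', date1) AS monthNum, SUM(cost1) AS cost
        FROM table1 GROUP BY strftime('%m', date1)
        UNION ALL
        SELECT strftime('%m', date1) AS monthNum, SUM(cost1) AS cost
        FROM table2 GROUP BY strftime('%m', date1)
    ) c1
    GROUP BY c1.monthNum
    ORDER BY c1.monthNum
""").fetchall()
print(rows)  # [('01', 175.0), ('02', 100.0)]
```

UNION ALL (not UNION) matters here: plain UNION would collapse two branches that happen to produce the same month/sum pair and silently lose money from the total.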
qid & accept id:
(22738933, 22743474)
query:
What are the ways to store and search complex numeric data?
soup wrap:
I recommend using Apache Solr to index and search your data.
How you use Solr depends on your requirements. I use it as a searchable cache of my data. It is extremely useful when the raw master data must be kept as files. Lots of frameworks integrate Solr as their search backend.
For building front-ends to a Solr index, check out solr-ajax.
Example
Install Solr
Download Solr distribution:
wget http://www.apache.org/dist/lucene/solr/4.7.0/solr-4.7.0.tgz
tar zxvf solr-4.7.0.tgz
Start Solr using embedded Jetty container:
cd solr-4.7.0/example
java -jar start.jar
Solr should now be running locally at:
http://localhost:8983/solr
data.xml
You did not specify a data format so I used the native XML supported by Solr:
<add>
  <!-- the XML tags were lost in extraction; apart from "id", "toy_type_s" and
       "estimated_spots_i" (which appear in the queries below), the field names
       here are illustrative reconstructions -->
  <doc>
    <field name="id">1</field>
    <field name="toy_type_s">Dog</field>
    <field name="pattern_s">Spotted</field>
    <field name="owner_s">John</field>
    <field name="color_s">White</field>
    <field name="estimated_spots_i">10</field>
    <field name="age_i">11</field>
  </doc>
  <doc>
    <field name="id">2</field>
    <field name="toy_type_s">Cat</field>
    <field name="pattern_s">Striped</field>
    <field name="owner_s">Jane</field>
    <field name="color_s">White</field>
    <field name="estimated_spots_i">5</field>
  </doc>
</add>
Notes:
- Every document in Solr must have a unique id
- The field names have a trailing "_s" and "_i" in their names to indicate field types. This is a cheat to take advantage of Solr's dynamic field feature.
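The suffix cheat works because Solr's stock example schema.xml already maps these suffixes to field types via dynamic fields, roughly like this (abridged from the example schema; exact attributes may differ between Solr versions):

```xml
<!-- any field ending in _i is indexed as an integer,
     any field ending in _s as an exact (untokenized) string -->
<dynamicField name="*_i" type="int"    indexed="true" stored="true"/>
<dynamicField name="*_s" type="string" indexed="true" stored="true"/>
```

So no schema changes are needed before indexing the sample documents.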
Index XML file
Lots of ways to get data into Solr. The simplest way is the curl command:
curl http://localhost:8983/solr/update?commit=true -H "Content-Type: text/xml" --data-binary @data.xml
It's worth noting that Solr supports other data formats, such as JSON and CSV.
Search indexed file
Again there are language libraries to support Solr searches, the following examples use curl. The Solr search syntax is along the lines you've required.
Here's a simple example:
$ curl http://localhost:8983/solr/select/?q=toy_type_s:Cat
<response>
  <!-- response markup reconstructed from the extracted values; some field
       names are illustrative (see the data.xml note above) -->
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">1</int>
    <lst name="params">
      <str name="q">toy_type_s:Cat</str>
    </lst>
  </lst>
  <result name="response" numFound="1" start="0">
    <doc>
      <str name="id">2</str>
      <str name="toy_type_s">Cat</str>
      <str name="pattern_s">Striped</str>
      <str name="owner_s">Jane</str>
      <str name="color_s">White</str>
      <int name="estimated_spots_i">5</int>
      <long name="_version_">1463999035283079168</long>
    </doc>
  </result>
</response>
A more complex search example:
$ curl "http://localhost:8983/solr/select/?q=toy_type_s:Cat%20AND%20estimated_spots_i:\[2%20TO%206\]"
<response>
  <!-- same reconstruction caveat as above -->
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">2</int>
    <lst name="params">
      <str name="q">toy_type_s:Cat AND estimated_spots_i:[2 TO 6]</str>
    </lst>
  </lst>
  <result name="response" numFound="1" start="0">
    <doc>
      <str name="id">2</str>
      <str name="toy_type_s">Cat</str>
      <str name="pattern_s">Striped</str>
      <str name="owner_s">Jane</str>
      <str name="color_s">White</str>
      <int name="estimated_spots_i">5</int>
      <long name="_version_">1463999035283079168</long>
    </doc>
  </result>
</response>
qid & accept id:
(22742235, 22744674)
query:
comparing the two values line by line from two different text files
soup wrap:
Maybe try:
paste a.txt b.txt | sed -n '/\([0-9]\+\)[[:space:]]\+\1/p' > c.txt
c.txt will contain:
10 10
And
paste a.txt b.txt | sed '/\([0-9]\+\)[[:space:]]\+\1/d' > d.txt
d.txt will contain:
20 30
30 20
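The whole pipeline can be verified end to end. Two details worth knowing: paste separates the columns with a tab, and anchoring the pattern with ^ and $ avoids accidental partial matches (unanchored, the backreference would also match e.g. 2 against 25). The inputs here mirror the question's a.txt/b.txt:

```shell
# build the two input files from the question
printf '10\n20\n30\n' > a.txt
printf '10\n30\n20\n' > b.txt

# lines where both columns hold the same number -> c.txt
paste a.txt b.txt | sed -n '/^\([0-9]\+\)[[:space:]]\+\1$/p' > c.txt

# lines where the columns differ -> d.txt
paste a.txt b.txt | sed '/^\([0-9]\+\)[[:space:]]\+\1$/d' > d.txt

cat c.txt   # -> 10<TAB>10
cat d.txt   # -> 20<TAB>30 and 30<TAB>20
```

Note that `\+` is a GNU sed extension (as in the original answer); on a strictly POSIX sed you would write `\{1,\}` instead.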
qid & accept id:
(22783242, 22811989)
query:
How to read XML column in SQL Server 2008?
soup wrap:
with xmlnamespaces('http://schemas.microsoft.com/office/infopath/2003/myXSD/2014-03-29T09:41:23' as my)
select M.XMLData.value('(/my:myFields/my:field1/text())[1]', 'int') as field1,
M.XMLData.value('(/my:myFields/my:field2/text())[1]', 'int') as field2,
M.XMLData.value('(/my:myFields/my:field3/text())[1]', 'bit') as field3,
M.XMLData.value('(/my:myFields/my:FormName/text())[1]', 'datetime') as FormName,
(
select ','+R.X.value('text()[1]', 'nvarchar(max)')
from M.XMLData.nodes('/my:myFields/my:Repeating') as R(X)
for xml path(''), type
).value('substring(text()[1], 2)', 'nvarchar(max)') as Repeating
from XMLMain as M
Result:
field1 field2 field3 FormName Repeating
----------- ----------- ------ ----------------------- -----------------------
1 2 1 2014-04-01 15:11:47.000 hi,hello,how are you?
qid & accept id:
(22861123, 22862746)
query:
Preventing removal of rows in a SQL query based on ordinal position
soup wrap:
Adjusted script to allow for gaps in Sequence
DECLARE @t TABLE(Text char(5), Sequence int)
INSERT @t VALUES
('ITEM1',1),('ITEM1',2),('ITEM1',3),('ITEM2',4),('ITEM2',5),
('ITEM3',6),('ITEM2',7),('ITEM2',8),('ITEM1',9),('ITEM1',10)
;WITH x as
(
SELECT Text,Sequence,
row_number() OVER (order by Sequence)
- row_number() OVER (partition by text order by Sequence) grp
FROM @t
)
SELECT text, MIN(Sequence) seq
FROM x
GROUP BY text, grp
ORDER BY seq
Result:
text seq
ITEM1 1
ITEM2 4
ITEM3 6
ITEM2 7
ITEM1 9
qid & accept id:
(22872278, 22872495)
query:
Remove duplicate address values where length of second column is less than the length of the greatest matching address
soup wrap:
You could rebuild your data into a new table using
select
address_1,max(address_2) as address_2, addressinfo
from
table1
group by address_1,addressinfo
http://sqlfiddle.com/#!6/3d22c/2
Edit 1:
To select city and state as well you need to include it as a group by expression:
select
address_1,max(address_2) as address_2, addressinfo,
city, state
from
table1
group by address_1,addressinfo, city, state
http://sqlfiddle.com/#!6/4527c/1
Edit 2:
The MAX function delivers the longest value here because string comparison is lexicographic: this works as long as the shorter values are true prefixes of the longer ones.
Here is an example of this: http://sqlfiddle.com/#!6/3fba8/1
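Both the behaviour and its limit are easy to see in a small sketch (sqlite3 as a stand-in; sample addresses invented). MAX on strings compares lexicographically, not by length:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (address_1 TEXT, address_2 TEXT)")
# shorter value is a true prefix of the longer one -> MAX picks the longer
con.executemany("INSERT INTO table1 VALUES (?, ?)",
                [("12 Main", "Suite"), ("12 Main", "Suite 100")])
# NOT a prefix pair -> MAX is simply the lexicographically larger string,
# even though it is the shorter one
con.executemany("INSERT INTO table1 VALUES (?, ?)",
                [("9 Oak", "Bldg A"), ("9 Oak", "Apartment 7")])

rows = con.execute("""
    SELECT address_1, MAX(address_2)
    FROM table1
    GROUP BY address_1
    ORDER BY address_1
""").fetchall()
print(rows)  # [('12 Main', 'Suite 100'), ('9 Oak', 'Bldg A')]
```

If you truly need "the longest string per group" regardless of prefix structure, order by LEN(address_2) instead of relying on MAX.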
qid & accept id:
(22876321, 22876446)
query:
Join 3 tables and select only the top average for each category
soup wrap:
Try this:
SELECT T1.Name, T1.Category, T1.Average
FROM
(SELECT B1.Name, B2.Category, AVG(R1.Stars) as Average
FROM Business B1
INNER JOIN Reviews R1
ON B1.ID=R1.BusinessID
INNER JOIN BusinessCategories B2
ON B2.BusinessID=R1.BusinessID
WHERE R1.Date >= convert(datetime,'01-6-2011') AND R1.Date <= convert(datetime,'30-6-2011')
GROUP BY Name, Category
) T1 -- ORDER BY removed: it is not allowed inside a derived table
LEFT JOIN (
SELECT B1.Name, B2.Category, AVG(R1.Stars) as Average
FROM Business B1
INNER JOIN Reviews R1
ON B1.ID=R1.BusinessID
INNER JOIN BusinessCategories B2
ON B2.BusinessID=R1.BusinessID
WHERE R1.Date >= convert(datetime,'01-6-2011') AND R1.Date <= convert(datetime,'30-6-2011')
GROUP BY Name, Category
) T2 ON T2.Average > T1.Average AND T1.Category = T2.Category
WHERE T2.Name IS NULL
OR
SELECT Name,Category,Average FROM
(
SELECT ROW_NUMBER() OVER(Partition By Category ORDER BY AVG(R1.Stars) DESC) as RN, B1.Name, B2.Category, AVG(R1.Stars) as Average
FROM Business B1
INNER JOIN Reviews R1
ON B1.ID=R1.BusinessID
INNER JOIN BusinessCategories B2
ON B2.BusinessID=R1.BusinessID
WHERE R1.Date >= convert(datetime,'01-6-2011') AND R1.Date <= convert(datetime,'30-6-2011')
GROUP BY Name, Category
) T
WHERE RN=1
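The logic of the second query (rank the per-category averages, keep rank 1) can be restated outside SQL; the review rows here are invented:

```python
from collections import defaultdict

# (business name, category, stars) review rows, invented for illustration
reviews = [
    ("Joe's", "Pizza", 5), ("Joe's", "Pizza", 4),
    ("Mario's", "Pizza", 3),
    ("Bean", "Coffee", 4), ("Brew", "Coffee", 5),
]

# average stars per (name, category), as the GROUP BY produces
totals = defaultdict(list)
for name, category, stars in reviews:
    totals[(name, category)].append(stars)
averages = {key: sum(v) / len(v) for key, v in totals.items()}

# keep the top average per category, as ROW_NUMBER() ... WHERE RN = 1 does
best = {}
for (name, category), avg in averages.items():
    if category not in best or avg > best[category][1]:
        best[category] = (name, avg)

print(best)  # {'Pizza': ("Joe's", 4.5), 'Coffee': ('Brew', 5.0)}
```

One caveat the SQL shares with this sketch: ties within a category are broken arbitrarily; use RANK() instead of ROW_NUMBER() if you want to keep all tied winners.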
qid & accept id:
(22909997, 22910064)
query:
Get an array of all columns starting with the same characters.
soup wrap:
MySQL LIKE to the rescue:
SELECT col1 FROM table1 WHERE col1 LIKE 'FEL%';
The drawback is that you have to add each prefix as a separate OR condition.
Alternative - REGEXP:
SELECT col1 FROM table1 WHERE col1 REGEXP '(FEL|PRO|VAI).*'
Then it's just a matter of writing proper regex.
I would use an extra column to group your items and avoid this kind of matching altogether, since it can be quite expensive on a bigger dataset.
https://dev.mysql.com/doc/refman/5.1/en/regexp.html#operator_regexp
qid & accept id:
(22910039, 22911326)
query:
SQL select id from a table to query again all at once
soup wrap:
It should be the last message, so use either MAX(id) or the latest datetime. In this case counter_party_id is just a user id, so the most recent counter_party_id is not necessarily the maximum counter_party_id. (I found the solution in the answers and gave credit.)
SELECT *
FROM yourTable
WHERE counter_party_id = ( SELECT MAX(id) FROM yourTable )
or
SELECT *
FROM yourTable
WHERE counter_party_id = ( SELECT counter_party_id FROM yourTable ORDER BY time_send DESC LIMIT 1)
The reason is that I simplified the example here; I had to implement this in a much more complicated schema.
qid & accept id:
(22914453, 22914977)
query:
Change column data type in MySQL without losing other metadata (DEFAULT, NOTNULL...)
soup wrap:
As stated in the manual, ALTER TABLE requires all attributes of the new column definition to be specified.
However, there is a way around this. You can use INFORMATION_SCHEMA metadata to reconstruct the desired ALTER query. For example, if we have a simple table:
mysql> DESCRIBE t;
+-------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------+------------------+------+-----+---------+----------------+
| id | int(11) unsigned | NO | PRI | NULL | auto_increment |
| value | varchar(255) | NO | | NULL | |
+-------+------------------+------+-----+---------+----------------+
2 rows in set (0.01 sec)
then we can reproduce our alter statement with:
SELECT
CONCAT(
COLUMN_NAME,
' @new_type',
IF(IS_NULLABLE='NO', ' NOT NULL ', ' '),
EXTRA
) AS s
FROM
INFORMATION_SCHEMA.COLUMNS
WHERE
TABLE_SCHEMA='test'
AND
TABLE_NAME='t'
the result would be:
+--------------------------------------+
| s |
+--------------------------------------+
| id @new_type NOT NULL auto_increment |
| value @new_type NOT NULL |
+--------------------------------------+
Here I've left @new_type as a placeholder to show that we can use a variable for it (or even substitute the new type directly into the query). To preserve a DEFAULT clause the same way, append the COLUMN_DEFAULT column from INFORMATION_SCHEMA.COLUMNS to the CONCAT as well. With a variable it looks like this:
Set our variables.
mysql> SET @new_type := 'VARCHAR(10)', @column_name := 'value';
Query OK, 0 rows affected (0.00 sec)
Prepare a variable for the prepared statement (it's a long query, but the pieces are explained above):
SET @sql = (SELECT CONCAT('ALTER TABLE t CHANGE `',COLUMN_NAME, '` `', COLUMN_NAME, '` ', @new_type, IF(IS_NULLABLE='NO', ' NOT NULL ', ' '), EXTRA) AS s FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA='test' AND TABLE_NAME='t' AND COLUMN_NAME=@column_name);
Prepare statement:
mysql> prepare stmt from @sql;
Query OK, 0 rows affected (0.00 sec)
Statement prepared
Finally, execute it:
mysql> execute stmt;
Query OK, 0 rows affected (0.22 sec)
Records: 0 Duplicates: 0 Warnings: 0
Then we'll get our data type changed to VARCHAR(10) while keeping all the other specifiers:
mysql> DESCRIBE t;
+-------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------+------------------+------+-----+---------+----------------+
| id | int(11) unsigned | NO | PRI | NULL | auto_increment |
| value | varchar(10) | NO | | NULL | |
+-------+------------------+------+-----+---------+----------------+
2 rows in set (0.00 sec)
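The CONCAT expression is just string assembly, so its logic can be sketched as a plain function. The metadata dict below mimics one row of INFORMATION_SCHEMA.COLUMNS; no MySQL server is involved, this only shows the statement being built:

```python
def build_alter(table, col, new_type):
    """Rebuild an ALTER TABLE ... CHANGE statement from column metadata,
    mirroring the CONCAT over INFORMATION_SCHEMA.COLUMNS in the answer."""
    parts = [
        f"ALTER TABLE {table} CHANGE `{col['COLUMN_NAME']}` "
        f"`{col['COLUMN_NAME']}` {new_type}",
        "NOT NULL" if col["IS_NULLABLE"] == "NO" else "",
        col["EXTRA"],  # e.g. auto_increment
    ]
    return " ".join(p for p in parts if p)

# metadata as SELECTed for the `value` column of the example table
value_col = {"COLUMN_NAME": "value", "IS_NULLABLE": "NO", "EXTRA": ""}
print(build_alter("t", value_col, "VARCHAR(10)"))
# ALTER TABLE t CHANGE `value` `value` VARCHAR(10) NOT NULL
```

The same shape extends naturally to DEFAULT handling: add one more element to `parts` built from the COLUMN_DEFAULT metadata.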
qid & accept id:
(22921153, 22921334)
query:
SQL - find all records where col like
soup wrap:
Use the UNION ALL operator and a basic join:
SELECT t.*
FROM TABLENAME t
JOIN(
SELECT '123 ' As pattern FROM dual UNION ALL
SELECT '245 ' FROM dual UNION ALL
SELECT '234 ' FROM dual UNION ALL
SELECT '323 ' FROM dual UNION ALL
SELECT '163 ' FROM dual
) p
ON t.col1 LIKE '%' || p.pattern || '%'
demo: http://sqlfiddle.com/#!4/a914f/2
EDIT
If there is another table that contains pattern values, the task is even easier, just:
SELECT t.*
FROM TABLENAME t
JOIN AnotherTable p
ON t.col1 LIKE '%' || p.pattern || '%'
Demo: http://sqlfiddle.com/#!4/e0318/1
qid & accept id:
(22959571, 22959761)
query:
SQL: Limit by unknown number of occurences
soup wrap:
That's easy. You must use a where clause and evaluate the minimum type there.
SELECT *
FROM mytable
WHERE type = (select min(type) from mytable)
ORDER BY id;
EDIT: Do the same with max() if you want to get the maximum type records.
EDIT: In case the types are not ascending as in your example, you will have to get the type of the minimum/maximum id instead of getting the minimum/maximum type:
SELECT *
FROM mytable
WHERE type = (select type from mytable where id = (select min(id) from mytable))
ORDER BY id;
qid & accept id:
(22963994, 22964432)
query:
(Query) Number of tries before the first correct solution
soup wrap:
Possibly using a sub query (not tested):-
SELECT problem_id, IF(b.user_id IS NULL, 0, COUNT(*))
FROM solution a
LEFT OUTER JOIN
(
SELECT user_id, problem_id, MIN(date) AS min_date
FROM solution
WHERE correct = true
GROUP BY user_id, problem_id
) b
ON a.problem_id = b.problem_id
AND a.user_id = b.user_id
AND a.date < b.min_date
WHERE a.user_id = ?
GROUP BY problem_id
EDIT - Having played with the test data I think I may have a solution. Not sure if there are any edge cases it fails on though:-
SELECT a.user_id, a.problem_id, SUM(IF(b.user_id IS NULL OR a.date <= b.min_date, 1, 0))
FROM solution a
LEFT OUTER JOIN
(
SELECT user_id, problem_id, MIN(date) AS min_date
FROM solution
WHERE correct = 'true'
GROUP BY user_id, problem_id
) b
ON a.problem_id = b.problem_id
AND a.user_id = b.user_id
GROUP BY a.user_id, problem_id
This has a sub query to find the lowest date with a correct solution for each user/problem, and joins that against the list of solutions. It then does a SUM of 1 or 0: a row counts as 1 if there is no correct solution at all, or if the date of the first correct solution is greater than or equal to this solution's date.
SQL fiddle for it here:-
http://www.sqlfiddle.com/#!2/f48e11/1
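The counting rule inside that SUM(IF(...)) can be restated in plain code; the solution rows below are invented, with dates as simple integers:

```python
# solution rows: (user_id, problem_id, date, correct), invented
rows = [
    (1, 7, 1, False), (1, 7, 2, False), (1, 7, 3, True), (1, 7, 4, True),
    (1, 8, 1, False), (1, 8, 2, False),   # problem 8 never solved
]

# min date of a correct solution per (user, problem), like the sub query
first_ok = {}
for user, prob, date, correct in rows:
    if correct:
        key = (user, prob)
        first_ok[key] = min(date, first_ok.get(key, date))

# a row counts if no correct solution exists, or it is on/before the first one
tries = {}
for user, prob, date, correct in rows:
    key = (user, prob)
    counts = key not in first_ok or date <= first_ok[key]
    tries[key] = tries.get(key, 0) + (1 if counts else 0)

print(tries)  # {(1, 7): 3, (1, 8): 2}
```

So attempts after the first correct answer (like the second True row for problem 7) are excluded, while every attempt at an unsolved problem is counted.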
qid & accept id:
(22986618, 22987277)
query:
Compare financial data from this week to the same week last year
soup wrap:
It doesn't sound like you're confident about how you (or, more specifically, your boss) want to correlate a week's value from one year to another ("go by month mainly, and it can be out by a week or 2").
Here is a starting point based on the data you shared
Example of last year's report
SELECT YEAR(`date`) AS `year`
, WEEKOFYEAR(`date`) AS weekno
,Storecode AS storecode
, SUM(amount) AS amount
FROM transactions
WHERE YEAR(`date`) = YEAR(DATE_SUB(NOW(), INTERVAL 1 YEAR))
GROUP BY YEAR(`date`), WEEKOFYEAR(`date`), Storecode
Here is an example of that query with comparisons
SELECT this.storecode
, this.weekno
, this.amount AS current_amount
, history.amount AS past_amount
FROM (SELECT YEAR(`date`) AS `year`
, WEEKOFYEAR(`date`) AS weekno
,Storecode AS storecode
, SUM(amount) AS amount
FROM transactions
WHERE YEAR(`date`) = YEAR(NOW())
GROUP BY YEAR(`date`), WEEKOFYEAR(`date`), Storecode) AS this
JOIN (SELECT YEAR(`date`) AS `year`
, WEEKOFYEAR(`date`) AS weekno
,Storecode AS storecode
, SUM(amount) AS amount
FROM transactions
WHERE YEAR(`date`) = YEAR(DATE_SUB(NOW(), INTERVAL 1 YEAR))
GROUP BY YEAR(`date`), WEEKOFYEAR(`date`), Storecode) AS history
ON this.weekno = history.weekno
AND this.storecode = history.storecode;
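If you want to experiment with this week-over-week join without a MySQL server, here is a minimal sketch using Python's built-in sqlite3. SQLite has no YEAR() or WEEKOFYEAR(), so strftime('%Y') and strftime('%W') stand in, and the two years are hard-coded; the table and column names follow the query above, but the sample rows are invented.

```python
import sqlite3

# In-memory sketch of the year-over-year weekly comparison.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE transactions (date TEXT, Storecode TEXT, amount REAL)")
con.executemany(
    "INSERT INTO transactions VALUES (?, ?, ?)",
    [
        ("2014-01-07", "S1", 100.0),  # week 01 of 2014
        ("2013-01-08", "S1", 80.0),   # week 01 of 2013
    ],
)

# strftime('%Y') / strftime('%W') replace MySQL's YEAR() / WEEKOFYEAR().
rows = con.execute("""
    SELECT this.storecode, this.weekno, this.amount, history.amount
    FROM (SELECT strftime('%Y', date) AS year,
                 strftime('%W', date) AS weekno,
                 Storecode AS storecode,
                 SUM(amount) AS amount
          FROM transactions
          WHERE strftime('%Y', date) = '2014'
          GROUP BY year, weekno, storecode) AS this
    JOIN (SELECT strftime('%Y', date) AS year,
                 strftime('%W', date) AS weekno,
                 Storecode AS storecode,
                 SUM(amount) AS amount
          FROM transactions
          WHERE strftime('%Y', date) = '2013'
          GROUP BY year, weekno, storecode) AS history
      ON this.weekno = history.weekno
     AND this.storecode = history.storecode
""").fetchall()
```

Each result row pairs this year's weekly total with last year's for the same store and week number.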
qid & accept id:
(23034365, 23039408)
query:
How to exclude a word from a regular expression in oracle?
soup:
Oracle does not support lookaheads.\nWith the products as you show, you can use this:
\nSELECT * FROM TABLENAME WHERE REGEXP_LIKE(PRODUCT, 'product_\d+(\s*\d+)*', 'c');\n
\nThis is only based on the product names you have shown. If it does not catch everything you want, give us a better idea of what we are trying to match.
\nAnother option: it's a hack, but if you're confident that "product_digits " should never be followed by a "t", you can use this:
\nSELECT * FROM TABLENAME WHERE REGEXP_LIKE(PRODUCT, 'product_\d+($|\s)($|[^t]).*', 'c');\n
\n
soup wrap:
Oracle does not support lookaheads.
With the products as you show, you can use this:
SELECT * FROM TABLENAME WHERE REGEXP_LIKE(PRODUCT, 'product_\d+(\s*\d+)*', 'c');
This is only based on the product names you have shown. If it does not catch everything you want, give us a better idea of what we are trying to match.
Another option: it's a hack, but if you're confident that "product_digits " should never be followed by a "t", you can use this:
SELECT * FROM TABLENAME WHERE REGEXP_LIKE(PRODUCT, 'product_\d+($|\s)($|[^t]).*', 'c');
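To sanity-check the lookahead-free approach outside Oracle, here is a quick sketch with Python's re module. The sample product strings are invented, and Python's regex flavor is not identical to Oracle's, so this only demonstrates the matching logic of the first pattern.

```python
import re

# The answer's first pattern: "product_" + digits, optionally followed
# by more space-separated digit groups, with nothing else after.
pattern = re.compile(r"product_\d+(\s*\d+)*")

# Invented sample values mapped to whether the pattern should fully match.
samples = {
    "product_123": True,
    "product_123 456": True,
    "product_123 text": False,  # trailing word rejected with no lookahead needed
    "product_abc": False,
}
results = {s: pattern.fullmatch(s) is not None for s in samples}
```

`fullmatch` plays the role of anchoring the pattern to the whole value, which is what makes the trailing word fail without any lookahead.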
qid & accept id:
(23035651, 23038168)
query:
mySQL show logs within a time range from each message?
soup:
Rather than any sort of subquery, it sounds like what you want can be accomplished with a LEFT JOIN of the table against itself, but instead of a simple join condition, use the epoch BETWEEN... condition in the join's ON clause.
\nThe left side of the join will be filtered to username = 'bob' while the right side will locate messages in the related time ranges.
\nAdd a DISTINCT to deduplicate rows if needed.
\nSELECT\n  DISTINCT\n  rng.epoch,\n  rng.username,\n  rng.message\nFROM\n  logs AS main\n  LEFT JOIN logs as rng \n  /* Join the epoch values from the table to related rows within 3 hours */\n  ON rng.epoch BETWEEN main.epoch AND (main.epoch + INTERVAL 3 HOUR)\n/* filter the main one for the desired username */\nWHERE main.username = 'bob'\n
\nWhat isn't clear from your question yet is whether you ultimately only want bob's rows returned. If that is the case, both sides of the join need to be filtered in the WHERE clause, or usernames matched in the ON clause:
\nFROM\n  logs AS main\n  LEFT JOIN logs as rng \n  ON rng.epoch BETWEEN main.epoch AND (main.epoch + INTERVAL 3 HOUR)\n  /* match usernames so the related rows are only bob's */\n  AND main.username = rng.username\n
\n
soup wrap:
Rather than any sort of subquery, it sounds like what you want can be accomplished with a LEFT JOIN of the table against itself, but instead of a simple join condition, use the epoch BETWEEN... condition in the join's ON clause.
The left side of the join will be filtered to username = 'bob' while the right side will locate messages in the related time ranges.
Add a DISTINCT to deduplicate rows if needed.
SELECT
DISTINCT
rng.epoch,
rng.username,
rng.message
FROM
logs AS main
LEFT JOIN logs as rng
/* Join the epoch values from the table to related rows within 3 hours */
ON rng.epoch BETWEEN main.epoch AND (main.epoch + INTERVAL 3 HOUR)
/* filter the main one for the desired username */
WHERE main.username = 'bob'
What isn't clear from your question yet is whether you ultimately only want bob's rows returned. If that is the case, both sides of the join need to be filtered in the WHERE clause, or usernames matched in the ON clause:
FROM
logs AS main
LEFT JOIN logs as rng
ON rng.epoch BETWEEN main.epoch AND (main.epoch + INTERVAL 3 HOUR)
/* match usernames so the related rows are only bob's */
AND main.username = rng.username
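Here is a runnable sketch of that self-join using Python's built-in sqlite3. The table and column names come from the answer; the sample rows are invented, and since SQLite has no INTERVAL syntax, the 3-hour window is expressed as plain integer seconds on a Unix-epoch column.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE logs (epoch INTEGER, username TEXT, message TEXT)")
con.executemany("INSERT INTO logs VALUES (?, ?, ?)", [
    (1000, "bob",   "start"),
    (2000, "alice", "within 3h of bob"),
    (1000 + 3 * 3600 + 1, "carol", "outside the window"),
])

# Self-join: every row whose epoch falls within 3 hours after one of
# bob's rows. DISTINCT deduplicates rows reachable from several anchors.
rows = con.execute("""
    SELECT DISTINCT rng.epoch, rng.username, rng.message
    FROM logs AS main
    LEFT JOIN logs AS rng
      ON rng.epoch BETWEEN main.epoch AND main.epoch + 3 * 3600
    WHERE main.username = 'bob'
    ORDER BY rng.epoch
""").fetchall()
```

Carol's row sits one second past the window, so it is excluded while Alice's row is kept.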
qid & accept id:
(23069422, 23069632)
query:
How to create MySQL database for "type" quiz
soup:
Well Basically you want
\nQuestions, ID and Text\nChoices ID, QuestionID and the text\n
\nAnswers is just
\nQuestionID, ChoiceID\n\n\nQuestions Table\nId Text \n1 'What is your favourite colour?'\n\nChoices Table\nId, QuestionID, Text\n1 1 'Red'\n2 1 'Blue'\n3 1 'Green'\n4 1 'Pale Blue Green with yellow dots'\n\nAnswers\nVictimID QuestionID ChoiceID\n(userID?)1 4\n
\n
soup wrap:
Well, basically you want:
Questions: ID and Text
Choices: ID, QuestionID and Text
Answers is just
QuestionID, ChoiceID
Questions Table
Id Text
1 'What is your favourite colour?'
Choices Table
Id, QuestionID, Text
1 1 'Red'
2 1 'Blue'
3 1 'Green'
4 1 'Pale Blue Green with yellow dots'
Answers
VictimID QuestionID ChoiceID
(userID?) 1 4
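A runnable sketch of this three-table schema with Python's built-in sqlite3, using the example data above plus an assumed VictimID of 1, and a join that resolves a stored answer back to its question and choice text:

```python
import sqlite3

# The three tables from the answer, with the colour question as sample data.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Questions (Id INTEGER PRIMARY KEY, Text TEXT);
    CREATE TABLE Choices   (Id INTEGER PRIMARY KEY, QuestionID INTEGER, Text TEXT);
    CREATE TABLE Answers   (VictimID INTEGER, QuestionID INTEGER, ChoiceID INTEGER);

    INSERT INTO Questions VALUES (1, 'What is your favourite colour?');
    INSERT INTO Choices VALUES (1, 1, 'Red'), (2, 1, 'Blue'),
                               (3, 1, 'Green'),
                               (4, 1, 'Pale Blue Green with yellow dots');
    INSERT INTO Answers VALUES (1, 1, 4);
""")

# Resolve one user's answer back to readable text.
answer = con.execute("""
    SELECT q.Text, c.Text
    FROM Answers a
    JOIN Questions q ON q.Id = a.QuestionID
    JOIN Choices   c ON c.Id = a.ChoiceID
    WHERE a.VictimID = 1
""").fetchone()
```

Because Answers stores only IDs, adding questions or choices never changes its shape.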
qid & accept id:
(23091177, 23091405)
query:
Find lowest value in particular group
soup:
What you need is a subquery with a GROUP BY.
\nOne way to do this which is easy to follow is:
\nSELECT column1, name, column2\nFROM MyTable as mt1\nWHERE column1 in (SELECT Min(column1) FROM MyTable as mt2 GROUP BY column2)\n
\nBut a better, cleaner way:
\nSELECT column1, name, column2\nFROM MyTable as mt1\nINNER JOIN\n(SELECT column2, Min(column1) as minc1 FROM MyTable as mt2 GROUP BY column2) as mt2\nON mt1.column1=mt2.minc1 AND mt1.column2=mt2.column2;\n
\n\nNote: These two forms should be supported by most DBMS's.
\n
soup wrap:
What you need is a subquery with a GROUP BY.
One way to do this which is easy to follow is:
SELECT column1, name, column2
FROM MyTable as mt1
WHERE column1 in (SELECT Min(column1) FROM MyTable as mt2 GROUP BY column2)
But a better, cleaner way:
SELECT column1, name, column2
FROM MyTable as mt1
INNER JOIN
(SELECT column2, Min(column1) as minc1 FROM MyTable as mt2 GROUP BY column2) as mt2
ON mt1.column1=mt2.minc1 AND mt1.column2=mt2.column2;
Note: these two forms should be supported by most DBMSs.
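Here is a runnable sketch of the join form using Python's built-in sqlite3 (sample data invented). Note it matches column2 in the join as well as the minimum value, so a minimum from one group cannot accidentally pick up equal values belonging to another group:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE MyTable (column1 INTEGER, name TEXT, column2 TEXT)")
con.executemany("INSERT INTO MyTable VALUES (?, ?, ?)", [
    (5, "a", "g1"), (3, "b", "g1"),
    (7, "c", "g2"), (3, "d", "g2"),
])

# Min of column1 per column2 group, then join back to recover the full row.
# Both minimums are 3 here, so matching column2 is what keeps the groups apart.
rows = con.execute("""
    SELECT mt1.column1, mt1.name, mt1.column2
    FROM MyTable AS mt1
    JOIN (SELECT column2, MIN(column1) AS minc1
          FROM MyTable GROUP BY column2) AS mt2
      ON mt1.column1 = mt2.minc1 AND mt1.column2 = mt2.column2
    ORDER BY mt1.column2
""").fetchall()
```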
qid & accept id:
(23096845, 23101361)
query:
How to find overlapping periods recursively in SQL Server
soup:
I would first work out where the islands are in your data set, and only after that, work out which ones are overlapped by your query ranges:
\ndeclare @t table (ID int,StartDate date,EndDate date)\ninsert into @t(ID,StartDate,EndDate) values\n(1 ,'20140105','20140110'),\n(2 ,'20140106','20140111'),\n(3 ,'20140107','20140112'),\n(4 ,'20140108','20140113'),\n(5 ,'20140109','20140114'),\n(6 ,'20140126','20140131'),\n(7 ,'20140127','20140201'),\n(8 ,'20140128','20140202'),\n(9 ,'20140129','20140203'),\n(10 ,'20140130','20140204')\n\ndeclare @Start date\ndeclare @End date\nselect @Start='20140106',@End='20140107'\n\n;With PotIslands as (\n --Find ranges which aren't overlapped at their start\n select StartDate,EndDate from @t t where\n not exists (select * from @t t2 where\n t2.StartDate < t.StartDate and\n t2.EndDate >= t.StartDate)\n union all\n --Extend the ranges by any other ranges which overlap on the end\n select pi.StartDate,t.EndDate\n from PotIslands pi\n inner join\n @t t\n on\n pi.EndDate >= t.StartDate and pi.EndDate < t.EndDate\n), Islands as (\n select StartDate,MAX(EndDate) as EndDate from PotIslands group by StartDate\n)\nselect * from Islands i where @Start <= i.EndDate and @End >= i.StartDate\n
\nResult:
\nStartDate EndDate\n---------- ----------\n2014-01-05 2014-01-14\n
\nIf you need the individual rows, you can now join the selected islands back to the @t table for a simple range query.
\nThis works because, for example, if any row within an island is ever included in a range, the entire remaining rows on an island will always also be included. So we find the islands first.
\n
soup wrap:
I would first work out where the islands are in your data set, and only after that, work out which ones are overlapped by your query ranges:
declare @t table (ID int,StartDate date,EndDate date)
insert into @t(ID,StartDate,EndDate) values
(1 ,'20140105','20140110'),
(2 ,'20140106','20140111'),
(3 ,'20140107','20140112'),
(4 ,'20140108','20140113'),
(5 ,'20140109','20140114'),
(6 ,'20140126','20140131'),
(7 ,'20140127','20140201'),
(8 ,'20140128','20140202'),
(9 ,'20140129','20140203'),
(10 ,'20140130','20140204')
declare @Start date
declare @End date
select @Start='20140106',@End='20140107'
;With PotIslands as (
--Find ranges which aren't overlapped at their start
select StartDate,EndDate from @t t where
not exists (select * from @t t2 where
t2.StartDate < t.StartDate and
t2.EndDate >= t.StartDate)
union all
--Extend the ranges by any other ranges which overlap on the end
select pi.StartDate,t.EndDate
from PotIslands pi
inner join
@t t
on
pi.EndDate >= t.StartDate and pi.EndDate < t.EndDate
), Islands as (
select StartDate,MAX(EndDate) as EndDate from PotIslands group by StartDate
)
select * from Islands i where @Start <= i.EndDate and @End >= i.StartDate
Result:
StartDate EndDate
---------- ----------
2014-01-05 2014-01-14
If you need the individual rows, you can now join the selected islands back to the @t table for a simple range query.
This works because, for example, if any row within an island is ever included in a range, the entire remaining rows on an island will always also be included. So we find the islands first.
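The recursive CTE above is SQL Server specific, but the islands idea itself can be sketched in a few lines of Python: sort the ranges, merge any that touch or overlap, then test the query range against the merged islands. Dates stay as YYYYMMDD strings, which compare correctly as text; the data and query range are the ones from the answer.

```python
# The ten sample ranges from the answer.
ranges = [
    ("20140105", "20140110"), ("20140106", "20140111"), ("20140107", "20140112"),
    ("20140108", "20140113"), ("20140109", "20140114"),
    ("20140126", "20140131"), ("20140127", "20140201"), ("20140128", "20140202"),
    ("20140129", "20140203"), ("20140130", "20140204"),
]

def islands(rs):
    """Merge sorted ranges into maximal non-overlapping islands."""
    out = []
    for start, end in sorted(rs):
        if out and start <= out[-1][1]:               # overlaps the open island
            out[-1] = (out[-1][0], max(out[-1][1], end))
        else:                                          # starts a new island
            out.append((start, end))
    return out

# Keep islands overlapped by the query range, as the final SELECT does.
q_start, q_end = "20140106", "20140107"
hits = [i for i in islands(ranges) if q_start <= i[1] and q_end >= i[0]]
```

The result reproduces the answer's output: the first five rows merge into the single island 2014-01-05 to 2014-01-14.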
qid & accept id:
(23106523, 23106632)
query:
SQL Joining Of Queries
soup:
You can use conditional aggregation:
\nSELECT i.DSTAMP, i.NAME,\n       SUM(CASE WHEN i.CODE = 'IN' THEN i.WEIGHT END) as IN_KG_Weight,\n       SUM(CASE WHEN i.CODE = 'OUT' THEN i.WEIGHT END) as OUT_KG_Weight\nFROM inventory i\nWHERE i.CODE IN ('IN', 'OUT')\nGROUP BY i.DSTAMP, i.NAME;\n
\nEDIT:
\nTo group this just by date:
\nSELECT to_char(i.DSTAMP, 'YYYY-MM-DD') as yyyymmdd, i.NAME,\n       SUM(CASE WHEN i.CODE = 'IN' THEN i.WEIGHT END) as IN_KG_Weight,\n       SUM(CASE WHEN i.CODE = 'OUT' THEN i.WEIGHT END) as OUT_KG_Weight\nFROM inventory i\nWHERE i.CODE IN ('IN', 'OUT')\nGROUP BY to_char(i.DSTAMP, 'YYYY-MM-DD'), i.NAME;\n
\nThis converts the value to a date string, which is fine for ordering.
\n
soup wrap:
You can use conditional aggregation:
SELECT i.DSTAMP, i.NAME,
SUM(CASE WHEN i.CODE = 'IN' THEN i.WEIGHT END) as IN_KG_Weight,
SUM(CASE WHEN i.CODE = 'OUT' THEN i.WEIGHT END) as OUT_KG_Weight
FROM inventory i
WHERE i.CODE IN ('IN', 'OUT')
GROUP BY i.DSTAMP, i.NAME;
EDIT:
To group this just by date:
SELECT to_char(i.DSTAMP, 'YYYY-MM-DD') as yyyymmdd, i.NAME,
SUM(CASE WHEN i.CODE = 'IN' THEN i.WEIGHT END) as IN_KG_Weight,
SUM(CASE WHEN i.CODE = 'OUT' THEN i.WEIGHT END) as OUT_KG_Weight
FROM inventory i
WHERE i.CODE IN ('IN', 'OUT')
GROUP BY to_char(i.DSTAMP, 'YYYY-MM-DD'), i.NAME;
This converts the value to a date string, which is fine for ordering.
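One thing to watch with conditional aggregation: a WHERE filter keeping only one CODE value would remove the rows the other CASE branch needs, leaving that column always NULL. A runnable sketch with Python's built-in sqlite3, table and column names from the answer, sample rows invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE inventory (DSTAMP TEXT, NAME TEXT, CODE TEXT, WEIGHT REAL)")
con.executemany("INSERT INTO inventory VALUES (?, ?, ?, ?)", [
    ("2014-04-16", "widget", "IN",  10.0),
    ("2014-04-16", "widget", "IN",   5.0),
    ("2014-04-16", "widget", "OUT",  4.0),
])

# One pass over the table, pivoting IN/OUT rows into two summed columns.
rows = con.execute("""
    SELECT DSTAMP, NAME,
           SUM(CASE WHEN CODE = 'IN'  THEN WEIGHT END) AS in_kg,
           SUM(CASE WHEN CODE = 'OUT' THEN WEIGHT END) AS out_kg
    FROM inventory
    GROUP BY DSTAMP, NAME
""").fetchall()
```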
qid & accept id:
(23116249, 23116789)
query:
string substitution from text file to another string
soup:
awk '{print "INSERT INTO users (email,paypal_tran,CCReceipt) VALUES"; print "(\x27"$1"\x27,\x27"$2"\x27,\x27"$3"\x27);"}' input.txt\n
\nConverts your sample input to preferred output. It should work for multi line input.
\nEDIT
\nThe variables you are using in this line:
\ncat temp1 | awk 'email="$1"; transaction="$2"; ccreceipt="$3";'\n
\nare only visible to awk and in this command. They are not shell variables.\nAlso in your sed commands remove those single quotes then you can get the values:
\nsed "s/EMAIL/$email/"\n
\n
soup wrap:
awk '{print "INSERT INTO users (email,paypal_tran,CCReceipt) VALUES"; print "(\x27"$1"\x27,\x27"$2"\x27,\x27"$3"\x27);"}' input.txt
This converts your sample input to the preferred output. It should work for multi-line input.
EDIT
The variables you are using in this line:
cat temp1 | awk 'email="$1"; transaction="$2"; ccreceipt="$3";'
are only visible to awk and in this command. They are not shell variables.
Also, in your sed commands, remove the single quotes so the shell expands the variables:
sed "s/EMAIL/$email/"
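If awk isn't handy, the same transformation is a few lines of Python. The sample input line is invented; note that neither this sketch nor the awk one-liner escapes quotes inside the values, which real data would need.

```python
# Turn whitespace-separated "email transaction receipt" lines into INSERTs,
# mirroring the awk one-liner above.
lines = ["bob@example.com TX123 RCPT9"]

statements = []
for line in lines:
    email, tran, receipt = line.split()
    statements.append(
        "INSERT INTO users (email,paypal_tran,CCReceipt) VALUES\n"
        f"('{email}','{tran}','{receipt}');"
    )
```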
qid & accept id:
(23124414, 23124493)
query:
Android auto refresh when new data inserted into listview
soup:
Call notifyDataSetChanged() on your Adapter.
\nSome additional specifics on how/when to call notifyDataSetChanged() can be viewed in this Google I/O video.
\nUse a Handler and its postDelayed method to invalidate the list's adapter as follows:
\nfinal Handler handler = new Handler();\nhandler.postDelayed( new Runnable() {\n\n  @Override\n  public void run() {\n    adapter.notifyDataSetChanged();\n    handler.postDelayed( this, 60 * 1000 );\n  }\n}, 60 * 1000 );\n
\nYou must only update UI in the main (UI) thread.
\nBy creating the handler in the main thread, you ensure that everything you post to the handler is run in the main thread also.
\ntry\n {\n validat_user(receivedName);\n final Handler handler = new Handler();\n handler.postDelayed( new Runnable() {\n\n @Override\n public void run() {\n todoItems.clear();\n //alertDialog.cancel();\n validat_user(receivedName);\n handler.postDelayed( this, 60 * 1000 );\n }\n }, 60 * 1000 );\n\n\n }\n\n catch(Exception e)\n {\n display("Network error.\nPlease check with your network settings.");\n }\n
\nThe first validat_user call loads the data; after that, the Handler refreshes the values every minute.
\nmy full code is below
\npackage com.example.employeeinduction;\n\nimport java.io.BufferedReader;\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.io.InputStreamReader;\nimport java.util.ArrayList;\nimport java.util.Collections;\nimport java.util.Iterator;\nimport java.util.List;\n\nimport org.apache.http.HttpResponse;\nimport org.apache.http.NameValuePair;\nimport org.apache.http.client.HttpClient;\nimport org.apache.http.client.entity.UrlEncodedFormEntity;\nimport org.apache.http.client.methods.HttpPost;\nimport org.apache.http.impl.client.DefaultHttpClient;\nimport org.apache.http.message.BasicNameValuePair;\nimport org.apache.http.params.BasicHttpParams;\nimport org.apache.http.params.HttpConnectionParams;\nimport org.apache.http.params.HttpParams;\nimport org.json.JSONArray;\nimport org.json.JSONObject;\n\nimport android.app.Activity;\nimport android.app.AlertDialog;\nimport android.app.ProgressDialog;\nimport android.content.Context;\nimport android.content.DialogInterface;\nimport android.content.Intent;\nimport android.content.res.TypedArray;\nimport android.os.AsyncTask;\nimport android.os.Bundle;\nimport android.os.Handler;\nimport android.support.v4.widget.DrawerLayout;\nimport android.util.Log;\nimport android.view.Menu;\nimport android.view.MenuItem;\nimport android.view.View;\nimport android.widget.AdapterView;\nimport android.widget.AdapterView.OnItemClickListener;\nimport android.widget.ArrayAdapter;\nimport android.widget.ImageView;\nimport android.widget.ListView;\nimport android.widget.PopupMenu;\nimport android.widget.PopupMenu.OnMenuItemClickListener;\nimport android.widget.Toast;\n\n\npublic class pdf extends Activity\n{\n\n ImageView iv;\n public boolean connect=false,logged=false;\n public String db_select;\n ListView l1;\n AlertDialog alertDialog;\n String mPwd,UName1="Success",UName,ret,receivedName;\n public Iterator itr;\n //private String SERVICE_URL = "http://61.12.7.197:8080/pdf";\n //private String SERVICE_URL1 = 
"http://61.12.7.197:8080/url";\n //private final String SERVICE_URL = "http://10.54.3.208:8080/Employee/person/pdf";\n //private final String SERVICE_URL1 = "http://10.54.3.208:8080/Employee/person/url";\n private final String SERVICE_URL = Urlmanager.Address+"pdf";\n private final String SERVICE_URL1 = Urlmanager.Address+"url";\n private final String TAG = "Pdf";\n ArrayList todoItems;\n Boolean isInternetPresent = false;\n ConnectionDetector cd;\n ArrayAdapter aa;\n public List list1=new ArrayList();\n public DrawerLayout mDrawerLayout;\n public ListView mDrawerList;\n //public ActionBarDrawerToggle mDrawerToggle;\n\n // NavigationDrawer title "Nasdaq" in this example\n public CharSequence mDrawerTitle;\n\n // App title "Navigation Drawer" in this example \n public CharSequence mTitle;\n\n // slider menu items details \n public String[] navMenuTitles=null;\n public TypedArray navMenuIcons;\n\n public ArrayList navDrawerItems;\n public NavDrawerListAdapter adapter;\n\n @Override\n protected void onCreate(Bundle savedInstanceState) \n {\n super.onCreate(savedInstanceState);\n setContentView(R.layout.sliding_project);\n iv = (ImageView)findViewById(R.id.imageView2);\n l1 = (ListView)findViewById(R.id.list);\n\n\n mTitle = mDrawerTitle = getTitle();\n\n // getting items of slider from array\n navMenuTitles = getResources().getStringArray(R.array.nav_drawer_items);\n\n // getting Navigation drawer icons from res \n navMenuIcons = getResources()\n .obtainTypedArray(R.array.nav_drawer_icons);\n\n mDrawerLayout = (DrawerLayout) findViewById(R.id.drawer_layout);\n mDrawerList = (ListView) findViewById(R.id.list_slidermenu);\n\n navDrawerItems = new ArrayList();\n\n\n // list item in slider at 1 Home Nasdaq details\n navDrawerItems.add(new NavDrawerItem(navMenuTitles[0], navMenuIcons.getResourceId(0, -1)));\n // list item in slider at 2 Facebook details\n navDrawerItems.add(new NavDrawerItem(navMenuTitles[1], navMenuIcons.getResourceId(1, -1)));\n // list item in slider at 
3 Google details\n navDrawerItems.add(new NavDrawerItem(navMenuTitles[2], navMenuIcons.getResourceId(2, -1)));\n // list item in slider at 4 Apple details\n\n\n // Recycle array\n navMenuIcons.recycle();\n\n mDrawerList.setOnItemClickListener(new SlideMenuClickListener());\n\n // setting list adapter for Navigation Drawer\n adapter = new NavDrawerListAdapter(getApplicationContext(),\n navDrawerItems);\n mDrawerList.setAdapter(adapter);\n\n if (savedInstanceState == null) {\n displayView(0);\n }\n\n iv.setOnClickListener(new View.OnClickListener() {\n\n @Override\n public void onClick(View v) {\n\n\n PopupMenu popup = new PopupMenu(getBaseContext(), v);\n\n /** Adding menu items to the popumenu */\n popup.getMenuInflater().inflate(R.menu.main, popup.getMenu());\n\n popup.setOnMenuItemClickListener(new OnMenuItemClickListener() {\n\n @Override\n public boolean onMenuItemClick(MenuItem item) {\n\n switch (item.getItemId()){\n case R.id.Home:\n Intent a = new Intent(pdf.this,Design_Activity.class);\n startActivity(a);\n //Projects_Accel.this.finish();\n // return true;\n break;\n case R.id.Logout:\n /*Intent z = new Intent(this,MainActivity.class);\n z.addFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP);\n startActivity(z);\n this.finish();*/\n Intent z = new Intent(pdf.this,MainActivity.class);\n z.setFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP | \n Intent.FLAG_ACTIVITY_CLEAR_TASK |\n Intent.FLAG_ACTIVITY_NEW_TASK);\n startActivity(z);\n pdf.this.finish();\n // return true;\n break;\n }\n\n return true;\n }\n });\n popup.show();\n }\n });\n\n todoItems = new ArrayList();\n aa = new ArrayAdapter(this,R.layout.list_row,R.id.title,todoItems);\n l1.setAdapter(aa);\n todoItems.clear();\n Intent intent = getIntent();\n receivedName = (String) intent.getSerializableExtra("PROJECT");\n cd = new ConnectionDetector(getApplicationContext());\n isInternetPresent = cd.isConnectingToInternet();\n if(isInternetPresent)\n {\n try\n {\n validat_user(receivedName);\n final Handler handler = new 
Handler();\n handler.postDelayed( new Runnable() {\n\n @Override\n public void run() {\n todoItems.clear();\n //alertDialog.cancel();\n validat_user(receivedName);\n handler.postDelayed( this, 60 * 1000 );\n }\n }, 60 * 1000 );\n\n\n }\n\n catch(Exception e)\n {\n display("Network error.\nPlease check with your network settings.");\n }\n }\n else\n {\n display("No Internet Connection..");\n }\n\n l1.setOnItemClickListener(new OnItemClickListener() {\n public void onItemClick(AdapterView> parent, View view,\n int position, long id) {\n\n String name=(String)parent.getItemAtPosition(position);\n\n /*Toast.makeText(getBaseContext(), name, Toast.LENGTH_LONG).show();\n Intent i = new Intent(getBaseContext(),Webview.class);\n i.putExtra("USERNAME", name);\n startActivity(i);*/\n cd = new ConnectionDetector(getApplicationContext());\n isInternetPresent = cd.isConnectingToInternet();\n if(isInternetPresent)\n {\n try\n {\n validat_user1(receivedName,name);\n\n }\n catch(Exception e)\n {\n display("Network error.\nPlease check with your network settings.");\n\n }\n\n }\n else\n {\n display("No Internet Connection..");\n }\n }\n });\n\n } \n private class SlideMenuClickListener implements\n ListView.OnItemClickListener {\n@Override\npublic void onItemClick(AdapterView> parent, View view, int position,\n long id) {\n // display view for selected item\n displayView(position);\n}\n}\n\n@Override\npublic boolean onCreateOptionsMenu(Menu menu) {\ngetMenuInflater().inflate(R.menu.main, menu);\n//setMenuBackground();\nreturn true;\n}\n\n\n/*@Override\npublic boolean onOptionsItemSelected(MenuItem item) {\n// title/icon\nif (mDrawerToggle.onOptionsItemSelected(item)) {\n return true;\n}\n// Handle action bar actions click\nswitch (item.getItemId()) {\ncase R.id.action_settings:\n return true;\ndefault:\n return super.onOptionsItemSelected(item);\n}\n}*/\n\n//called when invalidateOptionsMenu() invoke \n\n@Override\npublic boolean onPrepareOptionsMenu(Menu menu) {\n// if Navigation 
drawer is opened, hide the action items\n//boolean drawerOpen = mDrawerLayout.isDrawerOpen(mDrawerList);\n//menu.findItem(R.id.action_settings).setVisible(!drawerOpen);\nreturn super.onPrepareOptionsMenu(menu);\n}\n\nprivate void displayView(int position) {\n// update the main content with called Fragment\nswitch (position) {\n\ncase 1:\n //fragment = new Fragment2Profile();\n Intent i = new Intent(pdf.this,Design_Activity.class);\n startActivity(i);\n pdf.this.finish();\n break;\ncase 2:\n //fragment = new Fragment3Logout();\n Intent z = new Intent(pdf.this,MainActivity.class);\n z.setFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP | \n Intent.FLAG_ACTIVITY_CLEAR_TASK |\n Intent.FLAG_ACTIVITY_NEW_TASK);\n startActivity(z);\n pdf.this.finish();\n break;\n\ndefault:\n break;\n}\n\n\n\n}\n\n\n\n\n public void display(String msg) \n {\n Toast.makeText(pdf.this, msg, Toast.LENGTH_LONG).show();\n }\n private void validat_user(String st)\n {\n\n WebServiceTask wst = new WebServiceTask(WebServiceTask.POST_TASK, this, "");\n\n wst.addNameValuePair1("TABLE_NAME", st);\n // wst.addNameValuePair("Emp_PWD", stg2);\n // db_select=stg1;\n //display("I am");\n wst.execute(new String[] { SERVICE_URL });\n //display(SERVICE_URL);\n\n }\n private void validat_user1(String stg1,String stg2)\n {\n db_select=stg1;\n WebServiceTask wst = new WebServiceTask(WebServiceTask.POST_TASK, this, "Loading...");\n\n wst.addNameValuePair1("PDF_NAME", stg1);\n wst.addNameValuePair1("TABLE_NAME1", stg2);\n wst.execute(new String[] { SERVICE_URL1 });\n\n }\n @SuppressWarnings("deprecation")\n public void no_net()\n {\n display( "No Network Connection");\n final AlertDialog alertDialog = new AlertDialog.Builder(pdf.this).create();\n alertDialog.setTitle("No Internet Connection");\n alertDialog.setMessage("You don't have internet connection.\nElse please check the Internet Connection Settings.");\n //alertDialog.setIcon(R.drawable.error_info);\n alertDialog.setCancelable(false);\n alertDialog.setButton("Close", 
new DialogInterface.OnClickListener() \n {\n public void onClick(DialogInterface dialog, int which)\n { \n alertDialog.cancel();\n pdf.this.finish();\n System.exit(0);\n }\n });\n alertDialog.setButton2("Use Local DataBase", new DialogInterface.OnClickListener() \n {\n public void onClick(DialogInterface dialog, int which)\n {\n display( "Accessing local DataBase.....");\n alertDialog.cancel();\n }\n });\n alertDialog.show();\n }\n\n private class WebServiceTask extends AsyncTask {\n\n public static final int POST_TASK = 1;\n\n private static final String TAG = "WebServiceTask";\n\n // connection timeout, in milliseconds (waiting to connect)\n private static final int CONN_TIMEOUT = 12000;\n\n // socket timeout, in milliseconds (waiting for data)\n private static final int SOCKET_TIMEOUT = 12000;\n\n private int taskType = POST_TASK;\n private Context mContext = null;\n private String processMessage = "Processing...";\n\n private ArrayList params = new ArrayList();\n\n private ProgressDialog pDlg = null;\n\n public WebServiceTask(int taskType, Context mContext, String processMessage) {\n\n this.taskType = taskType;\n this.mContext = mContext;\n this.processMessage = processMessage;\n }\n\n public void addNameValuePair1(String name, String value) {\n\n params.add(new BasicNameValuePair(name, value));\n }\n @SuppressWarnings("deprecation")\n private void showProgressDialog() {\n\n pDlg = new ProgressDialog(mContext);\n pDlg.setMessage(processMessage);\n pDlg.setProgressDrawable(mContext.getWallpaper());\n pDlg.setProgressStyle(ProgressDialog.STYLE_SPINNER);\n pDlg.setCancelable(false);\n pDlg.show();\n\n }\n\n @Override\n protected void onPreExecute() {\n\n showProgressDialog();\n\n }\n\n protected String doInBackground(String... 
urls) {\n\n String url = urls[0];\n String result = "";\n\n HttpResponse response = doResponse(url);\n\n if (response == null) {\n return result;\n } else {\n\n try {\n\n result = inputStreamToString(response.getEntity().getContent());\n\n } catch (IllegalStateException e) {\n Log.e(TAG, e.getLocalizedMessage(), e);\n\n } catch (IOException e) {\n Log.e(TAG, e.getLocalizedMessage(), e);\n }\n\n }\n\n return result;\n }\n\n @Override\n protected void onPostExecute(String response) {\n\n handleResponse(response);\n pDlg.dismiss();\n\n }\n\n\n // Establish connection and socket (data retrieval) timeouts\n private HttpParams getHttpParams() {\n\n HttpParams htpp = new BasicHttpParams();\n\n HttpConnectionParams.setConnectionTimeout(htpp, CONN_TIMEOUT);\n HttpConnectionParams.setSoTimeout(htpp, SOCKET_TIMEOUT);\n\n return htpp;\n }\n\n private HttpResponse doResponse(String url) {\n\n // Use our connection and data timeouts as parameters for our\n // DefaultHttpClient\n HttpClient httpclient = new DefaultHttpClient(getHttpParams());\n\n HttpResponse response = null;\n\n try {\n switch (taskType) {\n\n case POST_TASK:\n HttpPost httppost = new HttpPost(url);\n // Add parameters\n httppost.setEntity(new UrlEncodedFormEntity(params));\n\n response = httpclient.execute(httppost);\n break;\n }\n } catch (Exception e) {\n display("Remote DataBase can not be connected.\nPlease check network connection.");\n\n Log.e(TAG, e.getLocalizedMessage(), e);\n return null;\n\n }\n\n return response;\n }\n\n private String inputStreamToString(InputStream is) {\n\n String line = "";\n StringBuilder total = new StringBuilder();\n\n // Wrap a BufferedReader around the InputStream\n BufferedReader rd = new BufferedReader(new InputStreamReader(is));\n\n try {\n // Read response until the end\n while ((line = rd.readLine()) != null) {\n total.append(line);\n }\n } catch (IOException e) {\n Log.e(TAG, e.getLocalizedMessage(), e);\n }\n\n // Return full string\n return total.toString();\n }\n\n 
}\n public void handleResponse(String response) \n { //display("JSON responce is : "+response);\n if(!response.equals(""))\n {\n try {\n\n JSONObject jso = new JSONObject(response);\n\n\n int UName = jso.getInt("status1");\n\n if(UName==1)\n {\n String status = jso.getString("reps1");\n ret=status.substring(12,status.length()-2);\n todoItems.add(0, ret);\n aa.notifyDataSetChanged();\n }\n else if(UName==-1)\n {\n String status = jso.getString("status");\n //ret=status.substring(12,status.length()-2);\n //display(status);\n Intent intObj=new Intent(pdf.this,Webview.class);\n intObj.putExtra("USERNAME",status);\n startActivity(intObj);\n }\n else if(UName>1)\n {\n// int count=Integer.parseInt(UName);\n// display("Number of Projects have been handling in AFL right now: "+count);\n list1=new ArrayList();\n\n JSONArray array=jso.getJSONArray("reps1");\n for(int i=0;i parent, View view, int position,\n long id) {\n // display view for selected item\n displayView(position);\n }\n }\n\n\n private void displayView(int position) {\n // update the main content with called Fragment\n // Fragment fragment = null;\n switch (position) {\n case 0:\n // fragment = new Fragment1User();\n break;\n case 1:\n // fragment = new Fragment2Profile();\n break;\n case 2:\n // fragment = new Fragment3Logout();\n break;\n\n default:\n break;\n }\n }*/\n\n\n}\n
\n
soup wrap:
Call notifyDataSetChanged() on your Adapter.
Some additional specifics on how/when to call notifyDataSetChanged() can be viewed in this Google I/O video.
Use a Handler and its postDelayed method to invalidate the list's adapter as follows:
final Handler handler = new Handler();
handler.postDelayed( new Runnable() {
@Override
public void run() {
adapter.notifyDataSetChanged();
handler.postDelayed( this, 60 * 1000 );
}
}, 60 * 1000 );
You must only update the UI from the main (UI) thread.
By creating the handler in the main thread, you ensure that everything you post to the handler is run in the main thread also.
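The self-re-posting pattern itself is language-agnostic: the callback does its work, then schedules itself again. A minimal sketch with Python's stdlib sched module standing in for Android's Handler (the tiny delay and run count are just for the demo; this does not model Android's main thread):

```python
import sched, time

# The callback re-schedules itself, like Runnable.run() calling
# postDelayed(this, ...) at the end of each pass.
scheduler = sched.scheduler(time.monotonic, time.sleep)
runs = []

def refresh(remaining):
    runs.append("refreshed")                # notifyDataSetChanged() equivalent
    if remaining > 1:                       # re-post for the next tick
        scheduler.enter(0.01, 1, refresh, (remaining - 1,))

scheduler.enter(0.01, 1, refresh, (3,))
scheduler.run()                             # drains the queue, including re-posts
```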
try
{
validat_user(receivedName);
final Handler handler = new Handler();
handler.postDelayed( new Runnable() {
@Override
public void run() {
todoItems.clear();
//alertDialog.cancel();
validat_user(receivedName);
handler.postDelayed( this, 60 * 1000 );
}
}, 60 * 1000 );
}
catch(Exception e)
{
display("Network error.\nPlease check with your network settings.");
}
The first validat_user call loads the data; after that, the Handler refreshes the values every minute.
my full code is below
package com.example.employeeinduction;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import org.apache.http.HttpResponse;
import org.apache.http.NameValuePair;
import org.apache.http.client.HttpClient;
import org.apache.http.client.entity.UrlEncodedFormEntity;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.message.BasicNameValuePair;
import org.apache.http.params.BasicHttpParams;
import org.apache.http.params.HttpConnectionParams;
import org.apache.http.params.HttpParams;
import org.json.JSONArray;
import org.json.JSONObject;
import android.app.Activity;
import android.app.AlertDialog;
import android.app.ProgressDialog;
import android.content.Context;
import android.content.DialogInterface;
import android.content.Intent;
import android.content.res.TypedArray;
import android.os.AsyncTask;
import android.os.Bundle;
import android.os.Handler;
import android.support.v4.widget.DrawerLayout;
import android.util.Log;
import android.view.Menu;
import android.view.MenuItem;
import android.view.View;
import android.widget.AdapterView;
import android.widget.AdapterView.OnItemClickListener;
import android.widget.ArrayAdapter;
import android.widget.ImageView;
import android.widget.ListView;
import android.widget.PopupMenu;
import android.widget.PopupMenu.OnMenuItemClickListener;
import android.widget.Toast;
public class pdf extends Activity
{
ImageView iv;
public boolean connect=false,logged=false;
public String db_select;
ListView l1;
AlertDialog alertDialog;
String mPwd,UName1="Success",UName,ret,receivedName;
public Iterator itr;
//private String SERVICE_URL = "http://61.12.7.197:8080/pdf";
//private String SERVICE_URL1 = "http://61.12.7.197:8080/url";
//private final String SERVICE_URL = "http://10.54.3.208:8080/Employee/person/pdf";
//private final String SERVICE_URL1 = "http://10.54.3.208:8080/Employee/person/url";
private final String SERVICE_URL = Urlmanager.Address+"pdf";
private final String SERVICE_URL1 = Urlmanager.Address+"url";
private final String TAG = "Pdf";
ArrayList todoItems;
Boolean isInternetPresent = false;
ConnectionDetector cd;
ArrayAdapter aa;
public List list1=new ArrayList();
public DrawerLayout mDrawerLayout;
public ListView mDrawerList;
//public ActionBarDrawerToggle mDrawerToggle;
// NavigationDrawer title "Nasdaq" in this example
public CharSequence mDrawerTitle;
// App title "Navigation Drawer" in this example
public CharSequence mTitle;
// slider menu items details
public String[] navMenuTitles=null;
public TypedArray navMenuIcons;
public ArrayList navDrawerItems;
public NavDrawerListAdapter adapter;
@Override
protected void onCreate(Bundle savedInstanceState)
{
super.onCreate(savedInstanceState);
setContentView(R.layout.sliding_project);
iv = (ImageView)findViewById(R.id.imageView2);
l1 = (ListView)findViewById(R.id.list);
mTitle = mDrawerTitle = getTitle();
// getting items of slider from array
navMenuTitles = getResources().getStringArray(R.array.nav_drawer_items);
// getting Navigation drawer icons from res
navMenuIcons = getResources()
.obtainTypedArray(R.array.nav_drawer_icons);
mDrawerLayout = (DrawerLayout) findViewById(R.id.drawer_layout);
mDrawerList = (ListView) findViewById(R.id.list_slidermenu);
navDrawerItems = new ArrayList<NavDrawerItem>();
// list item in slider at 1 Home Nasdaq details
navDrawerItems.add(new NavDrawerItem(navMenuTitles[0], navMenuIcons.getResourceId(0, -1)));
// list item in slider at 2 Facebook details
navDrawerItems.add(new NavDrawerItem(navMenuTitles[1], navMenuIcons.getResourceId(1, -1)));
// list item in slider at 3 Google details
navDrawerItems.add(new NavDrawerItem(navMenuTitles[2], navMenuIcons.getResourceId(2, -1)));
// list item in slider at 4 Apple details
// Recycle array
navMenuIcons.recycle();
mDrawerList.setOnItemClickListener(new SlideMenuClickListener());
// setting list adapter for Navigation Drawer
adapter = new NavDrawerListAdapter(getApplicationContext(),
navDrawerItems);
mDrawerList.setAdapter(adapter);
if (savedInstanceState == null) {
displayView(0);
}
iv.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
PopupMenu popup = new PopupMenu(getBaseContext(), v);
/** Adding menu items to the popumenu */
popup.getMenuInflater().inflate(R.menu.main, popup.getMenu());
popup.setOnMenuItemClickListener(new OnMenuItemClickListener() {
@Override
public boolean onMenuItemClick(MenuItem item) {
switch (item.getItemId()){
case R.id.Home:
Intent a = new Intent(pdf.this,Design_Activity.class);
startActivity(a);
//Projects_Accel.this.finish();
// return true;
break;
case R.id.Logout:
/*Intent z = new Intent(this,MainActivity.class);
z.addFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP);
startActivity(z);
this.finish();*/
Intent z = new Intent(pdf.this,MainActivity.class);
z.setFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP |
Intent.FLAG_ACTIVITY_CLEAR_TASK |
Intent.FLAG_ACTIVITY_NEW_TASK);
startActivity(z);
pdf.this.finish();
// return true;
break;
}
return true;
}
});
popup.show();
}
});
todoItems = new ArrayList<String>();
aa = new ArrayAdapter<String>(this,R.layout.list_row,R.id.title,todoItems);
l1.setAdapter(aa);
todoItems.clear();
Intent intent = getIntent();
receivedName = (String) intent.getSerializableExtra("PROJECT");
cd = new ConnectionDetector(getApplicationContext());
isInternetPresent = cd.isConnectingToInternet();
if(isInternetPresent)
{
try
{
validat_user(receivedName);
final Handler handler = new Handler();
handler.postDelayed( new Runnable() {
@Override
public void run() {
todoItems.clear();
//alertDialog.cancel();
validat_user(receivedName);
handler.postDelayed( this, 60 * 1000 );
}
}, 60 * 1000 );
}
catch(Exception e)
{
display("Network error.\nPlease check with your network settings.");
}
}
else
{
display("No Internet Connection..");
}
l1.setOnItemClickListener(new OnItemClickListener() {
public void onItemClick(AdapterView<?> parent, View view,
int position, long id) {
String name=(String)parent.getItemAtPosition(position);
/*Toast.makeText(getBaseContext(), name, Toast.LENGTH_LONG).show();
Intent i = new Intent(getBaseContext(),Webview.class);
i.putExtra("USERNAME", name);
startActivity(i);*/
cd = new ConnectionDetector(getApplicationContext());
isInternetPresent = cd.isConnectingToInternet();
if(isInternetPresent)
{
try
{
validat_user1(receivedName,name);
}
catch(Exception e)
{
display("Network error.\nPlease check with your network settings.");
}
}
else
{
display("No Internet Connection..");
}
}
});
}
private class SlideMenuClickListener implements
ListView.OnItemClickListener {
@Override
public void onItemClick(AdapterView<?> parent, View view, int position,
long id) {
// display view for selected item
displayView(position);
}
}
@Override
public boolean onCreateOptionsMenu(Menu menu) {
getMenuInflater().inflate(R.menu.main, menu);
//setMenuBackground();
return true;
}
/*@Override
public boolean onOptionsItemSelected(MenuItem item) {
// title/icon
if (mDrawerToggle.onOptionsItemSelected(item)) {
return true;
}
// Handle action bar actions click
switch (item.getItemId()) {
case R.id.action_settings:
return true;
default:
return super.onOptionsItemSelected(item);
}
}*/
//called when invalidateOptionsMenu() invoke
@Override
public boolean onPrepareOptionsMenu(Menu menu) {
// if Navigation drawer is opened, hide the action items
//boolean drawerOpen = mDrawerLayout.isDrawerOpen(mDrawerList);
//menu.findItem(R.id.action_settings).setVisible(!drawerOpen);
return super.onPrepareOptionsMenu(menu);
}
private void displayView(int position) {
// update the main content with called Fragment
switch (position) {
case 1:
//fragment = new Fragment2Profile();
Intent i = new Intent(pdf.this,Design_Activity.class);
startActivity(i);
pdf.this.finish();
break;
case 2:
//fragment = new Fragment3Logout();
Intent z = new Intent(pdf.this,MainActivity.class);
z.setFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP |
Intent.FLAG_ACTIVITY_CLEAR_TASK |
Intent.FLAG_ACTIVITY_NEW_TASK);
startActivity(z);
pdf.this.finish();
break;
default:
break;
}
}
public void display(String msg)
{
Toast.makeText(pdf.this, msg, Toast.LENGTH_LONG).show();
}
private void validat_user(String st)
{
WebServiceTask wst = new WebServiceTask(WebServiceTask.POST_TASK, this, "");
wst.addNameValuePair1("TABLE_NAME", st);
// wst.addNameValuePair("Emp_PWD", stg2);
// db_select=stg1;
//display("I am");
wst.execute(new String[] { SERVICE_URL });
//display(SERVICE_URL);
}
private void validat_user1(String stg1,String stg2)
{
db_select=stg1;
WebServiceTask wst = new WebServiceTask(WebServiceTask.POST_TASK, this, "Loading...");
wst.addNameValuePair1("PDF_NAME", stg1);
wst.addNameValuePair1("TABLE_NAME1", stg2);
wst.execute(new String[] { SERVICE_URL1 });
}
@SuppressWarnings("deprecation")
public void no_net()
{
display( "No Network Connection");
final AlertDialog alertDialog = new AlertDialog.Builder(pdf.this).create();
alertDialog.setTitle("No Internet Connection");
alertDialog.setMessage("You don't have internet connection.\nElse please check the Internet Connection Settings.");
//alertDialog.setIcon(R.drawable.error_info);
alertDialog.setCancelable(false);
alertDialog.setButton("Close", new DialogInterface.OnClickListener()
{
public void onClick(DialogInterface dialog, int which)
{
alertDialog.cancel();
pdf.this.finish();
System.exit(0);
}
});
alertDialog.setButton2("Use Local DataBase", new DialogInterface.OnClickListener()
{
public void onClick(DialogInterface dialog, int which)
{
display( "Accessing local DataBase.....");
alertDialog.cancel();
}
});
alertDialog.show();
}
private class WebServiceTask extends AsyncTask<String, Void, String> {
public static final int POST_TASK = 1;
private static final String TAG = "WebServiceTask";
// connection timeout, in milliseconds (waiting to connect)
private static final int CONN_TIMEOUT = 12000;
// socket timeout, in milliseconds (waiting for data)
private static final int SOCKET_TIMEOUT = 12000;
private int taskType = POST_TASK;
private Context mContext = null;
private String processMessage = "Processing...";
private ArrayList<NameValuePair> params = new ArrayList<NameValuePair>();
private ProgressDialog pDlg = null;
public WebServiceTask(int taskType, Context mContext, String processMessage) {
this.taskType = taskType;
this.mContext = mContext;
this.processMessage = processMessage;
}
public void addNameValuePair1(String name, String value) {
params.add(new BasicNameValuePair(name, value));
}
@SuppressWarnings("deprecation")
private void showProgressDialog() {
pDlg = new ProgressDialog(mContext);
pDlg.setMessage(processMessage);
pDlg.setProgressDrawable(mContext.getWallpaper());
pDlg.setProgressStyle(ProgressDialog.STYLE_SPINNER);
pDlg.setCancelable(false);
pDlg.show();
}
@Override
protected void onPreExecute() {
showProgressDialog();
}
protected String doInBackground(String... urls) {
String url = urls[0];
String result = "";
HttpResponse response = doResponse(url);
if (response == null) {
return result;
} else {
try {
result = inputStreamToString(response.getEntity().getContent());
} catch (IllegalStateException e) {
Log.e(TAG, e.getLocalizedMessage(), e);
} catch (IOException e) {
Log.e(TAG, e.getLocalizedMessage(), e);
}
}
return result;
}
@Override
protected void onPostExecute(String response) {
handleResponse(response);
pDlg.dismiss();
}
// Establish connection and socket (data retrieval) timeouts
private HttpParams getHttpParams() {
HttpParams htpp = new BasicHttpParams();
HttpConnectionParams.setConnectionTimeout(htpp, CONN_TIMEOUT);
HttpConnectionParams.setSoTimeout(htpp, SOCKET_TIMEOUT);
return htpp;
}
private HttpResponse doResponse(String url) {
// Use our connection and data timeouts as parameters for our
// DefaultHttpClient
HttpClient httpclient = new DefaultHttpClient(getHttpParams());
HttpResponse response = null;
try {
switch (taskType) {
case POST_TASK:
HttpPost httppost = new HttpPost(url);
// Add parameters
httppost.setEntity(new UrlEncodedFormEntity(params));
response = httpclient.execute(httppost);
break;
}
} catch (Exception e) {
display("Remote DataBase can not be connected.\nPlease check network connection.");
Log.e(TAG, e.getLocalizedMessage(), e);
return null;
}
return response;
}
private String inputStreamToString(InputStream is) {
String line = "";
StringBuilder total = new StringBuilder();
// Wrap a BufferedReader around the InputStream
BufferedReader rd = new BufferedReader(new InputStreamReader(is));
try {
// Read response until the end
while ((line = rd.readLine()) != null) {
total.append(line);
}
} catch (IOException e) {
Log.e(TAG, e.getLocalizedMessage(), e);
}
// Return full string
return total.toString();
}
}
public void handleResponse(String response)
{ //display("JSON responce is : "+response);
if(!response.equals(""))
{
try {
JSONObject jso = new JSONObject(response);
int UName = jso.getInt("status1");
if(UName==1)
{
String status = jso.getString("reps1");
ret=status.substring(12,status.length()-2);
todoItems.add(0, ret);
aa.notifyDataSetChanged();
}
else if(UName==-1)
{
String status = jso.getString("status");
//ret=status.substring(12,status.length()-2);
//display(status);
Intent intObj=new Intent(pdf.this,Webview.class);
intObj.putExtra("USERNAME",status);
startActivity(intObj);
}
else if(UName>1)
{
// int count=Integer.parseInt(UName);
// display("Number of Projects have been handling in AFL right now: "+count);
list1=new ArrayList();
JSONArray array=jso.getJSONArray("reps1");
for(int i=0;i<array.length();i++)
{
list1.add(array.getString(i));
}
todoItems.addAll(list1);
aa.notifyDataSetChanged();
}
}
catch (JSONException e)
{
Log.e(TAG, e.getLocalizedMessage(), e);
}
}
}
/*private class SlideMenuClickListener implements
ListView.OnItemClickListener {
@Override
public void onItemClick(AdapterView<?> parent, View view, int position,
long id) {
// display view for selected item
displayView(position);
}
}
private void displayView(int position) {
// update the main content with called Fragment
// Fragment fragment = null;
switch (position) {
case 0:
// fragment = new Fragment1User();
break;
case 1:
// fragment = new Fragment2Profile();
break;
case 2:
// fragment = new Fragment3Logout();
break;
default:
break;
}
}*/
}
qid & accept id:
(23129852, 23174570)
query:
How to use another table fields as a criteria for MS Access
soup:
The 2nd problem is a bit more difficult than the 1st. My approach would be to use 3 separate queries to get the answer:
\nQuery1 returns a record for each record in the original table, adding the year and quarter from the quarters table. Note that instead of using the quarters table, you could just as easily calculate the year and quarter from the date.
\nSELECT Table.FName, Table.FValue, Table.VDate, Quarters.Yr, Quarters.Qtr\nFROM [Table], Quarters\nWHERE (((Table.VDate)>=[start] And (Table.VDate)<=[end]));\n
\nQuery2 uses the results of Query1 and finds the minimum values you need:
\nSELECT Query1.FName, Query1.Yr, Query1.Qtr, Min(Query1.FValue) AS MinValue\nFROM Query1\nGROUP BY Query1.FName, Query1.Yr, Query1.Qtr;\n
\nQuery3 matches the results of Query1 and Query2 to show the date on which the minimum value was reached. Note that I made this a Sum query and used First(VDate), assuming that the minimum value may have occurred more than once and you need only the 1st time it happened.
\nSELECT Query1.FName, Query1.Yr, Query1.Qtr, Query2.MinValue, First(Query1.VDate) AS MidDate, Query1.FValue\nFROM Query1 INNER JOIN Query2 ON (Query1.Qtr = Query2.Qtr) AND (Query1.FValue = Query2.MinValue) AND (Query1.FName = Query2.FName)\nGROUP BY Query1.FName, Query1.Yr, Query1.Qtr, Query2.MinValue, Query1.FValue;\n
\nThere's probably a clever way to do this all in one query, but this is the way I usually solve similar problems.
\n
soup wrap:
The 2nd problem is a bit more difficult than the 1st. My approach would be to use 3 separate queries to get the answer:
Query1 returns a record for each record in the original table, adding the year and quarter from the quarters table. Note that instead of using the quarters table, you could just as easily calculate the year and quarter from the date.
SELECT Table.FName, Table.FValue, Table.VDate, Quarters.Yr, Quarters.Qtr
FROM [Table], Quarters
WHERE (((Table.VDate)>=[start] And (Table.VDate)<=[end]));
Query2 uses the results of Query1 and finds the minimum values you need:
SELECT Query1.FName, Query1.Yr, Query1.Qtr, Min(Query1.FValue) AS MinValue
FROM Query1
GROUP BY Query1.FName, Query1.Yr, Query1.Qtr;
Query3 matches the results of Query1 and Query2 to show the date on which the minimum value was reached. Note that I made this a Sum query and used First(VDate), assuming that the minimum value may have occurred more than once and you need only the 1st time it happened.
SELECT Query1.FName, Query1.Yr, Query1.Qtr, Query2.MinValue, First(Query1.VDate) AS MidDate, Query1.FValue
FROM Query1 INNER JOIN Query2 ON (Query1.Qtr = Query2.Qtr) AND (Query1.FValue = Query2.MinValue) AND (Query1.FName = Query2.FName)
GROUP BY Query1.FName, Query1.Yr, Query1.Qtr, Query2.MinValue, Query1.FValue;
There's probably a clever way to do this all in one query, but this is the way I usually solve similar problems.
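The three-query shape (tag each row with its year and quarter, take the minimum per group, join back to recover the date) can be checked outside Access; below is a minimal sketch using SQLite from Python, with made-up table and column data. MIN(VDate) stands in for Access's First(VDate): for data in date order they pick the same row, but First() is really "whichever row the engine sees first".

```python
import sqlite3

# Hypothetical stand-in for the Access table: (FName, FValue, VDate)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE T (FName TEXT, FValue REAL, VDate TEXT)")
conn.executemany("INSERT INTO T VALUES (?,?,?)", [
    ("A", 5.0, "2014-01-10"),
    ("A", 3.0, "2014-02-01"),   # Q1 minimum for A, first occurrence
    ("A", 3.0, "2014-03-15"),   # same minimum again; only the 1st date is wanted
    ("B", 9.0, "2014-01-20"),
])

# Inner query = Query1 + Query2 (quarter derived from the date, minimum per
# name per quarter); outer join = Query3 (recover the date of the minimum)
row = conn.execute("""
    SELECT t.FName, m.MinValue, MIN(t.VDate) AS MinDate
    FROM T t
    JOIN (SELECT FName,
                 (CAST(strftime('%m', VDate) AS INTEGER) + 2) / 3 AS Qtr,
                 MIN(FValue) AS MinValue
          FROM T
          GROUP BY FName, Qtr) m
      ON m.FName = t.FName
     AND m.MinValue = t.FValue
     AND m.Qtr = (CAST(strftime('%m', t.VDate) AS INTEGER) + 2) / 3
    WHERE t.FName = 'A'
    GROUP BY t.FName
""").fetchone()
```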
qid & accept id:
(23146750, 23147525)
query:
List records with duplicate values
soup:
If you have table Projects then you can correct your query as follows:
\nselect\n projectId,\n IDs = STUFF(\n (SELECT ','+ CAST(g2.[value] AS VARCHAR(255)) as 'data()' \n FROM ProjectDetail g2\n WHERE g2.recordType=1\n and g1.value=g2.value\n and g1.recordType=g2.recordType\n and g1.projectId=g2.projectIdand\n and g2.auditDate > '01-01-2014'\n For XML PATH('')\n ),1,1,'')\nFROM Projects P\nWHERE EXISTS (select projectID\n from ProjectDetail PD ON P.projectID=PD.ProjectID\n having count(*)>1)\n
\nOR without table Projects
\n select\n projectId,\n IDs = STUFF(\n (SELECT ','+ CAST(g2.[value] AS VARCHAR(255)) as 'data()' \n FROM ProjectDetail g2\n WHERE g2.recordType=1\n and g1.value=g2.value\n and g1.recordType=g2.recordType\n and g1.projectId=g2.projectIdand\n and g2.auditDate > '01-01-2014'\n For XML PATH('')\n ),1,1,'')\n FROM (select projectID\n from ProjectDetail PD\n having count(*)>1) P\n
\n
soup wrap:
If you have a Projects table, then you can correct your query as follows:
select
P.projectId,
IDs = STUFF(
(SELECT ','+ CAST(g2.[value] AS VARCHAR(255)) as 'data()'
FROM ProjectDetail g2
WHERE g2.recordType=1
and g2.projectId=P.projectId
and g2.auditDate > '20140101'
For XML PATH('')
),1,1,'')
FROM Projects P
WHERE EXISTS (select 1
from ProjectDetail PD
where PD.projectId=P.projectId
having count(*)>1)
Or, without the Projects table, derive the duplicated projectIds directly:
select
P.projectId,
IDs = STUFF(
(SELECT ','+ CAST(g2.[value] AS VARCHAR(255)) as 'data()'
FROM ProjectDetail g2
WHERE g2.recordType=1
and g2.projectId=P.projectId
and g2.auditDate > '20140101'
For XML PATH('')
),1,1,'')
FROM (select projectId
from ProjectDetail
group by projectId
having count(*)>1) P
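Setting the FOR XML PATH('') concatenation details aside, the core of this answer is "group by projectId and keep only groups with more than one row". A minimal sketch of that idea in SQLite from Python, with an invented miniature ProjectDetail table and GROUP_CONCAT playing the string-rollup role:

```python
import sqlite3

# Invented miniature version of the ProjectDetail table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ProjectDetail (projectId INTEGER, value TEXT)")
conn.executemany("INSERT INTO ProjectDetail VALUES (?,?)", [
    (1, "a"), (1, "b"),   # project 1 has multiple rows
    (2, "c"),             # project 2 has only one row
])

# Keep only projects with more than one row, rolling their values up into
# a single comma-separated string
rows = conn.execute("""
    SELECT projectId, GROUP_CONCAT(value) AS IDs
    FROM ProjectDetail
    GROUP BY projectId
    HAVING COUNT(*) > 1
""").fetchall()
```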
qid & accept id:
(23151081, 23151380)
query:
SQL Server: compare two columns in Select and count matches
soup:
It's a bit different, but I would try something like this:
\nSELECT a.col1, a.total_count, b.match_count,\n (100*b.match_count/a.total_count) AS match_percentage\nFROM (\n SELECT col1, COUNT(*) AS total_count\n FROM LogTable\n WHERE Category LIKE '2014-04%'\n GROUP BY col1\n) a\nJOIN (\n SELECT col1, COUNT(*) AS match_count\n FROM LogTable\n WHERE Category LIKE '2014-04%' AND col2=col3\n GROUP BY col1\n) b ON a.col1=b.col1\n
\nAs an alternative... this should give the same result. Not sure which would be more efficient:
\nSELECT col1, total_count,\n (SELECT COUNT(*)\n FROM LogTable\n WHERE Category LIKE '2014-04%' AND col1=a.col1 AND col2=col3\n ) AS match_count,\n (100*match_count/total_count) AS match_percentage\nFROM (\n SELECT col1, COUNT(*) AS total_count\n FROM LogTable\n WHERE Category LIKE '2014-04%'\n GROUP BY col1\n) a\n
\nBut... beware... I'm not sure all engines are able to reference the subselect column match_count directly in the expression used to build the match_percentage column.
\n
soup wrap:
It's a bit different, but I would try something like this:
SELECT a.col1, a.total_count, b.match_count,
(100*b.match_count/a.total_count) AS match_percentage
FROM (
SELECT col1, COUNT(*) AS total_count
FROM LogTable
WHERE Category LIKE '2014-04%'
GROUP BY col1
) a
JOIN (
SELECT col1, COUNT(*) AS match_count
FROM LogTable
WHERE Category LIKE '2014-04%' AND col2=col3
GROUP BY col1
) b ON a.col1=b.col1
As an alternative... this should give the same result. Not sure which would be more efficient:
SELECT col1, total_count,
(SELECT COUNT(*)
FROM LogTable
WHERE Category LIKE '2014-04%' AND col1=a.col1 AND col2=col3
) AS match_count,
(100*match_count/total_count) AS match_percentage
FROM (
SELECT col1, COUNT(*) AS total_count
FROM LogTable
WHERE Category LIKE '2014-04%'
GROUP BY col1
) a
But... beware... I'm not sure all engines are able to reference the subselect column match_count directly in the expression used to build the match_percentage column.
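A third shape, which avoids both the self-join and the alias-reference problem, is a single conditional aggregate. The sketch below uses SQLite from Python with made-up column data; in SQL Server the SUM would be spelled SUM(CASE WHEN col2 = col3 THEN 1 ELSE 0 END), and note the percentage uses integer division, as in the answer's queries:

```python
import sqlite3

# Made-up log rows: col2/col3 match on 2 of the 4 rows for group 'x'
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE LogTable (col1 TEXT, col2 TEXT, col3 TEXT)")
conn.executemany("INSERT INTO LogTable VALUES (?,?,?)", [
    ("x", "a", "a"),
    ("x", "a", "b"),
    ("x", "c", "c"),
    ("x", "d", "e"),
])

# One pass, one group: COUNT(*) for the total, a conditional SUM for the
# matches (SQLite lets SUM(col2 = col3) count the true comparisons)
row = conn.execute("""
    SELECT col1,
           COUNT(*)         AS total_count,
           SUM(col2 = col3) AS match_count,
           100 * SUM(col2 = col3) / COUNT(*) AS match_percentage
    FROM LogTable
    GROUP BY col1
""").fetchone()
```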
qid & accept id:
(23151241, 23151281)
query:
Create row in table with only auto generated fields - SQL
soup:
The SQL standard and most databases support the DEFAULT VALUES clause for this:
\nINSERT INTO "MIGRATION"."VERSION" DEFAULT VALUES;\n
\nThis is supported in
\n\n- CUBRID
\n- Firebird
\n- H2
\n- HSQLDB
\n- Ingres
\n- PostgreSQL
\n- SQLite
\n- SQL Server
\n- Sybase SQL Anywhere
\n
\nIf the above is not supported, you can still write this statement as a workaround. In fact, the first is specified by the SQL standard to be equivalent to the second:
\nINSERT INTO "MIGRATION"."VERSION" (ID, VERSION_DATE) VALUES (DEFAULT, DEFAULT);\n
\nThis will then also work with:
\n\n- Access
\n- DB2
\n- MariaDB
\n- MySQL
\n- Oracle
\n
\nFor more details, see this blog post here:
\nhttp://blog.jooq.org/2014/01/08/lesser-known-sql-features-default-values/
\n
soup wrap:
The SQL standard and most databases support the DEFAULT VALUES clause for this:
INSERT INTO "MIGRATION"."VERSION" DEFAULT VALUES;
This is supported in
- CUBRID
- Firebird
- H2
- HSQLDB
- Ingres
- PostgreSQL
- SQLite
- SQL Server
- Sybase SQL Anywhere
If the above is not supported, you can still write this statement as a workaround. In fact, the first is specified by the SQL standard to be equivalent to the second:
INSERT INTO "MIGRATION"."VERSION" (ID, VERSION_DATE) VALUES (DEFAULT, DEFAULT);
This will then also work with:
- Access
- DB2
- MariaDB
- MySQL
- Oracle
For more details, see this blog post here:
http://blog.jooq.org/2014/01/08/lesser-known-sql-features-default-values/
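Since SQLite is on the supported list, the clause is easy to try from Python. The table below is just a stand-in for MIGRATION.VERSION, with both columns auto-generated:

```python
import sqlite3

# Stand-in for the MIGRATION.VERSION table: every column auto-generated
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE version (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        version_date TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP
    )
""")

# No column list, no VALUES list: every column falls back to its default
conn.execute("INSERT INTO version DEFAULT VALUES")
row = conn.execute("SELECT id, version_date FROM version").fetchone()
```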
qid & accept id:
(23166266, 23195715)
query:
Procedure to insert data from one column into two columns in another table
soup:
Building a Looping PL/SQL Based DML Cursor For Multiple DML Targets
\nA PL/SQL Stored Procedure is a great way to accomplish your task. An alternate approach to breaking down your single name field into FIRST NAME and LAST NAME components could be to use an Oracle Regular Expression, as in:
\nSELECT REGEXP_SUBSTR('MYFIRST MYLAST','[^ ]+', 1, 1) from dual\n-- Result: MYFIRST\n\nSELECT REGEXP_SUBSTR('MYFIRST MYLAST','[^ ]+', 1, 2) from dual\n-- Result: MYLAST\n
\nA procedure based approach is a good idea; first wrap this query into a cursor definition. Integrate the cursor within a complete PL/SQL stored procedure DDL script.
\nCREATE or REPLACE PROCEDURE PROC_MYNAME_IMPORT IS\n\n -- Queries parsed name values from STAFF (the source) table \n\n CURSOR name_cursor IS\n SELECT REGEXP_SUBSTR(staff.name,...) as FirstName,\n REGEXP_SUBSTR(... ) as LastName\n FROM STAFF;\n\n BEGIN\n\n FOR i IN name_cursor LOOP\n\n --DML Command 1:\n INSERT INTO Table_One ( first_name, last_name )\n VALUES (i.FirstName, i.LastName);\n COMMIT;\n\n --DML Command 2:\n INSERT INTO Table_Two ...\n COMMIT;\n\n END LOOP;\n\n END proc_myname_import;\n
\nAs you can see from the example block, a long series of DML statements can take place (not just two) for a given cursor record and its values as it is handled by each loop iteration. Each field may be referenced by the name assigned to them within the cursor SQL statement. There is a '.' (dot) notation where the handle assigned to the cursor call is the prefix, as in:
\nCURSOR c1 IS\n SELECT st.col1, st.col2, st.col3\n FROM sample_table st\n WHERE ...\n
\nThen the cursor call for looping through the main record set:
\nFOR my_personal_loop IN c1 LOOP\n ...do this\n ...do that\n\n INSERT INTO some_other_table (column_one, column_two, column_three)\n VALUES (my_personal_loop.col1, my_personal_loop.col2, ...);\n\n COMMIT;\nEND LOOP;\n\n... and so on.\n
\n
soup wrap:
Building a Looping PL/SQL Based DML Cursor For Multiple DML Targets
A PL/SQL Stored Procedure is a great way to accomplish your task. An alternate approach to breaking down your single name field into FIRST NAME and LAST NAME components could be to use an Oracle Regular Expression, as in:
SELECT REGEXP_SUBSTR('MYFIRST MYLAST','[^ ]+', 1, 1) from dual
-- Result: MYFIRST
SELECT REGEXP_SUBSTR('MYFIRST MYLAST','[^ ]+', 1, 2) from dual
-- Result: MYLAST
A procedure based approach is a good idea; first wrap this query into a cursor definition. Integrate the cursor within a complete PL/SQL stored procedure DDL script.
CREATE or REPLACE PROCEDURE PROC_MYNAME_IMPORT IS
-- Queries parsed name values from STAFF (the source) table
CURSOR name_cursor IS
SELECT REGEXP_SUBSTR(staff.name,...) as FirstName,
REGEXP_SUBSTR(... ) as LastName
FROM STAFF;
BEGIN
FOR i IN name_cursor LOOP
--DML Command 1:
INSERT INTO Table_One ( first_name, last_name )
VALUES (i.FirstName, i.LastName);
COMMIT;
--DML Command 2:
INSERT INTO Table_Two ...
COMMIT;
END LOOP;
END proc_myname_import;
As you can see from the example block, a long series of DML statements can take place (not just two) for a given cursor record and its values as it is handled by each loop iteration. Each field may be referenced by the name assigned to them within the cursor SQL statement. There is a '.' (dot) notation where the handle assigned to the cursor call is the prefix, as in:
CURSOR c1 IS
SELECT st.col1, st.col2, st.col3
FROM sample_table st
WHERE ...
Then the cursor call for looping through the main record set:
FOR my_personal_loop IN c1 LOOP
...do this
...do that
INSERT INTO some_other_table (column_one, column_two, column_three)
VALUES (my_personal_loop.col1, my_personal_loop.col2, ...);
COMMIT;
END LOOP;
... and so on.
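The [^ ]+ pattern behaves the same way outside Oracle. Here is a small Python sketch of the "n-th run of non-space characters" idea behind the two REGEXP_SUBSTR calls; the helper name is made up for the example:

```python
import re

# name_part is a made-up helper mirroring REGEXP_SUBSTR(name, '[^ ]+', 1, n):
# return the n-th run of non-space characters, or None when no n-th run exists
def name_part(full_name, occurrence):
    runs = re.findall(r"[^ ]+", full_name)
    return runs[occurrence - 1] if len(runs) >= occurrence else None

first_name = name_part("MYFIRST MYLAST", 1)
last_name = name_part("MYFIRST MYLAST", 2)
```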
qid & accept id:
(23176321, 23178314)
query:
"Convert" Entity Framework program to raw SQL
soup:
I was there and the good news is you don't have to give up Entity Framework if you don't want to. The bad news is you have to update the database yourself. Which isn't as hard as it seems. I'm currently using EF 5 but plan to go to EF 6. I don't see why this still wouldn't work for EF 6.
\nFirst thing is in the constructor of the DbContext cast it to IObjectContextAdapter and get access to the ObjectContext. I make a property for this
\npublic virtual ObjectContext ObjContext\n{\n get\n {\n return ((IObjectContextAdapter)this).ObjectContext;\n }\n}\n
\nOnce you have that, subscribe to the SavingChanges event. This isn't our exact code; some things are copied out of other methods and redone. It just gives you an idea of what you need to do.
\nObjContext.SavingChanges += SaveData;\n\nprivate void SaveData(object sender, EventArgs e)\n{\n var context = sender as ObjectContext;\n if (context != null)\n {\n context.DetectChanges();\n var tsql = new StringBuilder();\n var dbParams = new List>();\n\n var deletedEntites = context.ObjectStateManager.GetObjectStateEntries(EntityState.Deleted);\n foreach (var delete in deletedEntites)\n {\n // Set state to unchanged - so entity framework will ignore\n delete.ChangeState(EntityState.Unchanged);\n // Method to generate tsql for deleting entities\n DeleteData(delete, tsql, dbParams);\n }\n\n var addedEntites = context.ObjectStateManager.GetObjectStateEntries(EntityState.Added);\n foreach (var add in addedEntites)\n {\n // Set state to unchanged - so entity framework will ignore\n add.ChangeState(EntityState.Unchanged);\n // Method to generate tsql for added entities\n AddData(add, tsql, dbParams);\n }\n\n var editedEntites = context.ObjectStateManager.GetObjectStateEntries(EntityState.Modified);\n foreach (var edit in editedEntites)\n {\n // Method to generate tsql for updating entities\n UpdateEditData(edit, tsql, dbParams);\n // Set state to unchanged - so entity framework will ignore\n edit.ChangeState(EntityState.Unchanged);\n }\n if (!tsql.ToString().IsEmpty())\n {\n var dbcommand = Database.Connection.CreateCommand();\n dbcommand.CommandText = tsql.ToString();\n\n foreach (var dbParameter in dbParams)\n {\n var dbparam = dbcommand.CreateParameter();\n dbparam.ParameterName = dbParameter.Key;\n dbparam.Value = dbParameter.Value;\n dbcommand.Parameters.Add(dbparam);\n }\n var results = dbcommand.ExecuteNonQuery();\n }\n }\n}\n
\nWhy do we set the entity to Unchanged after the update? Because you can do
\nvar changedProperties = edit.GetModifiedProperties();\n
\nto get a list of all the changed properties. Since all the entities are now marked as unchanged EF will not send any updates to SQL.
\nYou will also need to mess with the metadata to go from entity to table and property to fields. This isn't that hard to do, but messing with the metadata does take some time to learn. It's something I still struggle with sometimes. I refactored all that out into an IMetaDataHelper interface where I pass in the entity type and property name to get the table and field back - along with caching the result so I don't have to query metadata all the time.
\nAt the end, the tsql variable is a batch containing all the T-SQL the way we want it, with the locking hints and the transaction level. We also change numeric field updates from the absolute form (nfield = 10) to the relative form (nfield = nfield + 2) in the T-SQL when the user incremented them by 2, to avoid concurrency issues as well.
\nWhat you won't get is having SQL lock the row once someone starts to edit your entity, but I don't see how you would get that with stored procedures either.
\nAll in all it took me about 2 solid days to get this all up and running for us.
\n
soup wrap:
I was there and the good news is you don't have to give up Entity Framework if you don't want to. The bad news is you have to update the database yourself. Which isn't as hard as it seems. I'm currently using EF 5 but plan to go to EF 6. I don't see why this still wouldn't work for EF 6.
First thing is in the constructor of the DbContext cast it to IObjectContextAdapter and get access to the ObjectContext. I make a property for this
public virtual ObjectContext ObjContext
{
get
{
return ((IObjectContextAdapter)this).ObjectContext;
}
}
Once you have that, subscribe to the SavingChanges event. This isn't our exact code; some things are copied out of other methods and redone. It just gives you an idea of what you need to do.
ObjContext.SavingChanges += SaveData;
private void SaveData(object sender, EventArgs e)
{
var context = sender as ObjectContext;
if (context != null)
{
context.DetectChanges();
var tsql = new StringBuilder();
var dbParams = new List<KeyValuePair<string, object>>();
var deletedEntites = context.ObjectStateManager.GetObjectStateEntries(EntityState.Deleted);
foreach (var delete in deletedEntites)
{
// Set state to unchanged - so entity framework will ignore
delete.ChangeState(EntityState.Unchanged);
// Method to generate tsql for deleting entities
DeleteData(delete, tsql, dbParams);
}
var addedEntites = context.ObjectStateManager.GetObjectStateEntries(EntityState.Added);
foreach (var add in addedEntites)
{
// Set state to unchanged - so entity framework will ignore
add.ChangeState(EntityState.Unchanged);
// Method to generate tsql for added entities
AddData(add, tsql, dbParams);
}
var editedEntites = context.ObjectStateManager.GetObjectStateEntries(EntityState.Modified);
foreach (var edit in editedEntites)
{
// Method to generate tsql for updating entities
UpdateEditData(edit, tsql, dbParams);
// Set state to unchanged - so entity framework will ignore
edit.ChangeState(EntityState.Unchanged);
}
if (!string.IsNullOrEmpty(tsql.ToString()))
{
var dbcommand = Database.Connection.CreateCommand();
dbcommand.CommandText = tsql.ToString();
foreach (var dbParameter in dbParams)
{
var dbparam = dbcommand.CreateParameter();
dbparam.ParameterName = dbParameter.Key;
dbparam.Value = dbParameter.Value;
dbcommand.Parameters.Add(dbparam);
}
var results = dbcommand.ExecuteNonQuery();
}
}
}
Why do we set the entity to Unchanged after the update? Because you can do
var changedProperties = edit.GetModifiedProperties();
to get a list of all the changed properties. Since all the entities are now marked as unchanged EF will not send any updates to SQL.
You will also need to mess with the metadata to go from entity to table and property to fields. This isn't that hard to do, but messing with the metadata does take some time to learn. It's something I still struggle with sometimes. I refactored all that out into an IMetaDataHelper interface where I pass in the entity type and property name to get the table and field back - along with caching the result so I don't have to query metadata all the time.
At the end, the tsql variable is a batch containing all the T-SQL the way we want it, with the locking hints and the transaction level. We also change numeric field updates from the absolute form (nfield = 10) to the relative form (nfield = nfield + 2) in the T-SQL when the user incremented them by 2, to avoid concurrency issues as well.
What you won't get is having SQL lock the row once someone starts to edit your entity, but I don't see how you would get that with stored procedures either.
All in all it took me about 2 solid days to get this all up and running for us.
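The nfield = nfield + 2 point above is the classic lost-update mitigation. A minimal SQLite sketch of why emitting the delta rather than the absolute value matters when two sessions edit the same row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, nfield INTEGER)")
conn.execute("INSERT INTO t VALUES (1, 8)")

# Two editors each read nfield = 8 and each meant to add 2. With the
# absolute form (SET nfield = 10) the second write would silently undo the
# first; the relative form preserves both increments.
conn.execute("UPDATE t SET nfield = nfield + 2 WHERE id = 1")
conn.execute("UPDATE t SET nfield = nfield + 2 WHERE id = 1")
final_value = conn.execute("SELECT nfield FROM t WHERE id = 1").fetchone()[0]
```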
qid & accept id:
(23219081, 23221322)
query:
SQL code for DB query by date
soup:
If you will only have one value per tag per month, you can use a conditional aggregate to choose your record. I have gone for the MAX function, but if you only have one value it is arbitrary:
\nDECLARE @Year INT;\nSET @Year = 2013;\n\n-- CONVERT TO A DATE TO ALLOW A SARGEABLE PREDICATE IN THE WHERE CLAUSE\nDECLARE @Date SMALLDATETIME;\nSET @Date = CONVERT(SMALLDATETIME, CONVERT(CHAR(4), @Year) + '0101', 112);\n\nSELECT Tagname,\n Jan = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 1 THEN value END),\n Feb = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 2 THEN value END),\n Mar = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 3 THEN value END),\n Apr = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 4 THEN value END),\n May = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 5 THEN value END),\n Jun = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 6 THEN value END),\n Jul = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 7 THEN value END),\n Aug = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 8 THEN value END),\n Sep = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 9 THEN value END),\n Oct = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 10 THEN value END),\n Nov = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 11 THEN value END),\n Dec = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 12 THEN value END)\nFROM runtime.dbo.History\nWHERE Tagname IN ('Tag1', 'Tag2')\nAND wwVersion = 'Latest'\nAND DateTime >= @Date\nAND DateTime < DATEADD(YEAR, 1, @Date)\nGROUP BY TagName;\n
\nIf you will have multiple values then you will need to apply some sort of logic to choose the correct one. In the below example I have gone for the first value for each month:
\nDECLARE @Year INT;\nSET @Year = 2013;\n\n-- CONVERT TO A DATE TO ALLOW A SARGEABLE PREDICATE IN THE WHERE CLAUSE\nDECLARE @Date SMALLDATETIME;\nSET @Date = CONVERT(SMALLDATETIME, CONVERT(CHAR(4), @Year) + '0101', 112);\n\nSELECT Tagname,\n Jan = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 1 THEN value END),\n Feb = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 2 THEN value END),\n Mar = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 3 THEN value END),\n Apr = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 4 THEN value END),\n May = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 5 THEN value END),\n Jun = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 6 THEN value END),\n Jul = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 7 THEN value END),\n Aug = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 8 THEN value END),\n Sep = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 9 THEN value END),\n Oct = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 10 THEN value END),\n Nov = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 11 THEN value END),\n Dec = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 12 THEN value END)\nFROM ( SELECT TagName, \n DateTime,\n Value,\n RowNum = ROW_NUMBER() OVER(PARTITION BY TagName, DATEPART(MONTH, DateTime), DATEPART(YEAR, DateTime)\n ORDER BY DateTime)\n FROM runtime.dbo.History\n WHERE Tagname IN ('Tag1', 'Tag2')\n AND wwVersion = 'Latest'\n AND DateTime >= @Date\n AND DateTime < DATEADD(YEAR, 1, @Date)\n ) h\nWHERE h.RowNum = 1\nGROUP BY TagName;\n
\n
soup wrap:
If you will only have one value per tag per month, you can use a conditional aggregate to choose your record. I have gone for the MAX function, but if you only have one value it is arbitrary:
DECLARE @Year INT;
SET @Year = 2013;
-- CONVERT TO A DATE TO ALLOW A SARGEABLE PREDICATE IN THE WHERE CLAUSE
DECLARE @Date SMALLDATETIME;
SET @Date = CONVERT(SMALLDATETIME, CONVERT(CHAR(4), @Year) + '0101', 112);
SELECT Tagname,
Jan = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 1 THEN value END),
Feb = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 2 THEN value END),
Mar = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 3 THEN value END),
Apr = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 4 THEN value END),
May = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 5 THEN value END),
Jun = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 6 THEN value END),
Jul = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 7 THEN value END),
Aug = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 8 THEN value END),
Sep = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 9 THEN value END),
Oct = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 10 THEN value END),
Nov = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 11 THEN value END),
Dec = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 12 THEN value END)
FROM runtime.dbo.History
WHERE Tagname IN ('Tag1', 'Tag2')
AND wwVersion = 'Latest'
AND DateTime >= @Date
AND DateTime < DATEADD(YEAR, 1, @Date)
GROUP BY TagName;
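The conditional-aggregation pivot above is portable across SQL dialects. A minimal, runnable sketch of the same pattern using Python's sqlite3 (table and tag names here are made up for illustration, and only two months are shown):

```python
import sqlite3

# In-memory stand-in for the History table; names are illustrative.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE history (tagname TEXT, dt TEXT, value REAL)")
con.executemany("INSERT INTO history VALUES (?, ?, ?)", [
    ("Tag1", "2013-01-15", 10.0),
    ("Tag1", "2013-02-15", 20.0),
    ("Tag2", "2013-01-20", 30.0),
])

# One row per tag; each month becomes a column via MAX(CASE ...).
rows = con.execute("""
    SELECT tagname,
           MAX(CASE WHEN strftime('%m', dt) = '01' THEN value END) AS jan,
           MAX(CASE WHEN strftime('%m', dt) = '02' THEN value END) AS feb
    FROM history
    WHERE dt >= '2013-01-01' AND dt < '2014-01-01'
    GROUP BY tagname
    ORDER BY tagname
""").fetchall()
```

A tag with no reading in a month simply yields NULL for that column, as Tag2's feb does here.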
If you will have multiple values then you will need to apply some sort of logic to choose the correct one. In the below example I have gone for the first value for each month:
DECLARE @Year INT;
SET @Year = 2013;
-- CONVERT TO A DATE TO ALLOW A SARGEABLE PREDICATE IN THE WHERE CLAUSE
DECLARE @Date SMALLDATETIME;
SET @Date = CONVERT(SMALLDATETIME, CONVERT(CHAR(4), @Year) + '0101', 112);
SELECT Tagname,
Jan = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 1 THEN value END),
Feb = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 2 THEN value END),
Mar = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 3 THEN value END),
Apr = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 4 THEN value END),
May = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 5 THEN value END),
Jun = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 6 THEN value END),
Jul = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 7 THEN value END),
Aug = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 8 THEN value END),
Sep = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 9 THEN value END),
Oct = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 10 THEN value END),
Nov = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 11 THEN value END),
Dec = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 12 THEN value END)
FROM ( SELECT TagName,
DateTime,
Value,
RowNum = ROW_NUMBER() OVER(PARTITION BY TagName, DATEPART(MONTH, DateTime), DATEPART(YEAR, DateTime)
ORDER BY DateTime)
FROM runtime.dbo.History
WHERE Tagname IN ('Tag1', 'Tag2')
AND wwVersion = 'Latest'
AND DateTime >= @Date
AND DateTime < DATEADD(YEAR, 1, @Date)
) h
WHERE h.RowNum = 1
GROUP BY TagName;
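The ROW_NUMBER() deduplication step can be sketched the same way; sqlite3 supports window functions too (illustrative names again, and the query keeps only the earliest reading per tag and month):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE history (tagname TEXT, dt TEXT, value REAL)")
con.executemany("INSERT INTO history VALUES (?, ?, ?)", [
    ("Tag1", "2013-01-05", 10.0),   # first reading in January -> kept
    ("Tag1", "2013-01-25", 99.0),   # later January reading -> discarded
    ("Tag1", "2013-02-10", 20.0),
])

# Number the rows per tag+month by date; rownum = 1 is the earliest reading.
rows = con.execute("""
    SELECT tagname, dt, value
    FROM (SELECT tagname, dt, value,
                 ROW_NUMBER() OVER (PARTITION BY tagname, strftime('%Y-%m', dt)
                                    ORDER BY dt) AS rownum
          FROM history)
    WHERE rownum = 1
    ORDER BY dt
""").fetchall()
```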
qid & accept id:
(23225721, 23226382)
query:
Remove last character in dbms_output.put_line
soup:
You can't directly - you have no control over what has been written to the buffer. So you need to not write it in the first place. One way is to keep track of where you are in the output, the list of columns in this case, and only add the comma if you are not on the last item. The analytic row_number() function can be used for this:
\nbegin\n for v_rec in (\n select column_name,data_type,\n row_number() over (order by column_id desc) as rn\n from user_tab_cols\n where table_name = 'RFI_ATCH_CHKLST_DTL'\n order by column_id\n ) loop\n dbms_output.put('p' || v_rec.column_name);\n if v_rec.rn != 1 then\n dbms_output.put(',');\n end if;\n dbms_output.new_line;\n end loop;\nend;\n/\n\npRACD_REMARKS,\npRACD_NA_STS,\npRACD_VAL2_STS,\npRACD_VAL_STS,\npBCLI_CODE,\npBAI_CODE,\npRAH_ID,\npRACD_ID\n
\nThe rn pseudocolumn generates a numeric row counter, in descending order in this case. This is the reverse of the order the columns actually appear in - both order by clauses use the same value, column_id, with one descending and the other ascending:
\nselect column_id, column_name,\n row_number() over (order by column_id desc) as rn\nfrom user_tab_cols\nwhere table_name = 'RFI_ATCH_CHKLST_DTL'\norder by column_id;\n\n COLUMN_ID COLUMN_NAME RN\n---------- ------------------------------ ----------\n 1 RACD_REMARKS 8 \n 2 RACD_NA_STS 7 \n 3 RACD_VAL2_STS 6 \n 4 RACD_VAL_STS 5 \n 5 BCLI_CODE 4 \n 6 BAI_CODE 3 \n 7 RAH_ID 2 \n 8 RACD_ID 1 \n
\nSo when the row counter goes down to 1, you know you're on the last row from the cursor, and you can use that knowledge to omit the comma.
\nYou don't have to use column_id but it's probably useful here. You could order by column_name, or anything you like, as long as both clauses use the same ordering logic (but in reverse).
\n
soup wrap:
You can't directly - you have no control over what has been written to the buffer. So you need to not write it in the first place. One way is to keep track of where you are in the output, the list of columns in this case, and only add the comma if you are not on the last item. The analytic row_number() function can be used for this:
begin
for v_rec in (
select column_name,data_type,
row_number() over (order by column_id desc) as rn
from user_tab_cols
where table_name = 'RFI_ATCH_CHKLST_DTL'
order by column_id
) loop
dbms_output.put('p' || v_rec.column_name);
if v_rec.rn != 1 then
dbms_output.put(',');
end if;
dbms_output.new_line;
end loop;
end;
/
pRACD_REMARKS,
pRACD_NA_STS,
pRACD_VAL2_STS,
pRACD_VAL_STS,
pBCLI_CODE,
pBAI_CODE,
pRAH_ID,
pRACD_ID
The rn pseudocolumn generates a numeric row counter, in descending order in this case. This is the reverse of the order the columns actually appear in - both order by clauses use the same value, column_id, with one descending and the other ascending:
select column_id, column_name,
row_number() over (order by column_id desc) as rn
from user_tab_cols
where table_name = 'RFI_ATCH_CHKLST_DTL'
order by column_id;
COLUMN_ID COLUMN_NAME RN
---------- ------------------------------ ----------
1 RACD_REMARKS 8
2 RACD_NA_STS 7
3 RACD_VAL2_STS 6
4 RACD_VAL_STS 5
5 BCLI_CODE 4
6 BAI_CODE 3
7 RAH_ID 2
8 RACD_ID 1
So when the row counter goes down to 1, you know you're on the last row from the cursor, and you can use that knowledge to omit the comma.
You don't have to use column_id but it's probably useful here. You could order by column_name, or anything you like, as long as both clauses use the same ordering logic (but in reverse).
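The same descending-counter trick works outside the database too. A hypothetical Python equivalent of the loop, with the countdown playing the role of rn (column names copied from the example above):

```python
# Columns in their display order; the countdown plays the role of rn.
columns = ["RACD_REMARKS", "RACD_NA_STS", "RACD_ID"]

lines = []
for countdown, name in zip(range(len(columns), 0, -1), columns):
    # countdown == 1 exactly on the last column, so the comma is omitted there.
    lines.append("p" + name + ("," if countdown != 1 else ""))
```

Because the counter runs down rather than up, you know you are on the last item without having to know the total length inside the loop body.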
qid & accept id:
(23303779, 23303991)
query:
update a table from another table and add new values
soup:
You can use the MERGE statement to perform this UPSERT operation in a single statement, but since there are known issues with MERGE I would split it into two statements: an UPDATE and an INSERT
\nUPDATE
\nUPDATE O\nSET O.Initials = N.Initials \nFROM Original_Table O INNER JOIN New_Table N \nON O.ID = N.ID\n
\nINSERT
\nINSERT INTO Original_Table (ID , Initials)\nSELECT ID , Initials \nFROM New_Table\nWHERE NOT EXISTS ( SELECT 1 \n FROM Original_Table\n WHERE Original_Table.ID = New_Table.ID)\n
\nImportant Note
\nFor the reasons why I suggest avoiding the MERGE statement, read the article Use Caution with SQL Server's MERGE Statement by Aaron Bertrand
\n
soup wrap:
You can use the MERGE statement to perform this UPSERT operation in a single statement, but since there are known issues with MERGE I would split it into two statements: an UPDATE and an INSERT
UPDATE
UPDATE O
SET O.Initials = N.Initials
FROM Original_Table O INNER JOIN New_Table N
ON O.ID = N.ID
INSERT
INSERT INTO Original_Table (ID , Initials)
SELECT ID , Initials
FROM New_Table
WHERE NOT EXISTS ( SELECT 1
FROM Original_Table
WHERE Original_Table.ID = New_Table.ID)
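A runnable sketch of the two-statement UPSERT using Python's sqlite3 (table names lowercased, data invented; note the correlation between the two tables inside NOT EXISTS, which is what makes the INSERT skip existing IDs):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE original_table (id INTEGER PRIMARY KEY, initials TEXT)")
con.execute("CREATE TABLE new_table (id INTEGER PRIMARY KEY, initials TEXT)")
con.execute("INSERT INTO original_table VALUES (1, 'AA')")
con.executemany("INSERT INTO new_table VALUES (?, ?)", [(1, "AB"), (2, "CD")])

# Step 1: UPDATE the rows that already exist in the target.
con.execute("""
    UPDATE original_table
    SET initials = (SELECT n.initials FROM new_table n WHERE n.id = original_table.id)
    WHERE id IN (SELECT id FROM new_table)
""")

# Step 2: INSERT the rows the target does not have yet.
con.execute("""
    INSERT INTO original_table (id, initials)
    SELECT n.id, n.initials FROM new_table n
    WHERE NOT EXISTS (SELECT 1 FROM original_table o WHERE o.id = n.id)
""")

rows = con.execute("SELECT id, initials FROM original_table ORDER BY id").fetchall()
```

sqlite3 lacks the T-SQL UPDATE ... FROM join syntax, so the UPDATE uses a correlated subquery instead; the effect is the same.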
Important Note
For the reasons why I suggest avoiding the MERGE statement, read the article Use Caution with SQL Server's MERGE Statement by Aaron Bertrand
qid & accept id:
(23349694, 23349762)
query:
MySQL query to find partial duplicates
soup:
SELECT first_name,last_name,school,contest FROM table \nWHERE contest IN ('blah','mah','wah')\nGROUP BY first_name, last_name, school \nHAVING COUNT(DISTINCT contest)>1\n
\nEdit
\nSELECT * FROM table t JOIN\n(SELECT GROUP_CONCAT(id)as ids,first_name,last_name,school,contest FROM table\nWHERE contest IN (1001,1002,1003)\nGROUP BY first_name, last_name, school \nHAVING COUNT(DISTINCT contest)>1)x\nON FIND_IN_SET(t.id,x.ids)>0\n
\n\n
soup wrap:
SELECT first_name,last_name,school,contest FROM table
WHERE contest IN ('blah','mah','wah')
GROUP BY first_name, last_name, school
HAVING COUNT(DISTINCT contest)>1
Edit
SELECT * FROM table t JOIN
(SELECT GROUP_CONCAT(id)as ids,first_name,last_name,school,contest FROM table
WHERE contest IN (1001,1002,1003)
GROUP BY first_name, last_name, school
HAVING COUNT(DISTINCT contest)>1)x
ON FIND_IN_SET(t.id,x.ids)>0
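The duplicate-detection half of the query runs in sqlite3 as well (GROUP_CONCAT exists there too; FIND_IN_SET is MySQL-only, so joining back to the full rows is left out of this sketch, and the data is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE entries
               (id INTEGER, first_name TEXT, last_name TEXT, school TEXT, contest INTEGER)""")
con.executemany("INSERT INTO entries VALUES (?, ?, ?, ?, ?)", [
    (1, "Ann", "Lee", "North", 1001),
    (2, "Ann", "Lee", "North", 1002),   # same person, second contest -> partial duplicate
    (3, "Bob", "Ray", "South", 1001),
])

# People who entered more than one of the listed contests.
rows = con.execute("""
    SELECT first_name, last_name, school, GROUP_CONCAT(id) AS ids
    FROM entries
    WHERE contest IN (1001, 1002, 1003)
    GROUP BY first_name, last_name, school
    HAVING COUNT(DISTINCT contest) > 1
""").fetchall()
```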
qid & accept id:
(23369574, 23370310)
query:
How to replace ' or any special character in when using XMLELEMENT Oracle
soup:
You can make use of the utl_i18n package, and its unescape_reference() function in particular. Here is an example:
\nclear screen;\ncolumn res format a7;\n\nselect utl_i18n.unescape_reference(\n rtrim(\n xmlagg( -- use of xmlagg() function in \n -- this situation seems to be unnecessary \n XMLELEMENT(E,'I''m'||':')\n ).extract('//text()'),':'\n )\n ) as res\n from dual;\n
\nResult:
\nRES \n-------\nI'm \n
\n
soup wrap:
You can make use of the utl_i18n package, and its unescape_reference() function in particular. Here is an example:
clear screen;
column res format a7;
select utl_i18n.unescape_reference(
rtrim(
xmlagg( -- use of xmlagg() function in
-- this situation seems to be unnecessary
XMLELEMENT(E,'I''m'||':')
).extract('//text()'),':'
)
) as res
from dual;
Result:
RES
-------
I'm
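For comparison, the same kind of entity-unescaping outside Oracle can be sketched with Python's html.unescape (an analogy only, not the Oracle function; the trailing ':' mimics the delimiter that rtrim strips in the example above):

```python
from html import unescape

# XMLELEMENT escaped the apostrophe as a character reference and a ':' delimiter
# was appended; strip the delimiter, then decode the reference back to text.
aggregated = "I&apos;m:"
result = unescape(aggregated.rstrip(":"))
```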
qid & accept id:
(23400658, 23400704)
query:
SQL - ALL, Including all values
soup:
I think what you are after is an inner join. I'm not sure from your question which way around you want your data, but this should give you a good clue how to proceed and what keywords to look for in the documentation to go further.
\nSELECT a.*\nFROM xyz a\nINNER JOIN abc b ON b.account_number = a.account_number;\n
\nEDIT:
\nSeems I misunderstood the original question.. sorry. To get what you want you can just do:
\nSELECT campaign_id\nFROM xyz \nWHERE account_number IN ('1', '2', '3', '5')\nGROUP BY campaign_id\nHAVING COUNT(DISTINCT account_number) = 4;\n
\nThis is called relational division if you want to investigate further.
\n
soup wrap:
I think what you are after is an inner join. I'm not sure from your question which way around you want your data, but this should give you a good clue how to proceed and what keywords to look for in the documentation to go further.
SELECT a.*
FROM xyz a
INNER JOIN abc b ON b.account_number = a.account_number;
EDIT:
Seems I misunderstood the original question.. sorry. To get what you want you can just do:
SELECT campaign_id
FROM xyz
WHERE account_number IN ('1', '2', '3', '5')
GROUP BY campaign_id
HAVING COUNT(DISTINCT account_number) = 4;
This is called relational division if you want to investigate further.
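The relational-division pattern is easy to try out; a sqlite3 sketch with made-up campaigns (only a campaign covering every account in the list survives the HAVING clause):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE xyz (campaign_id TEXT, account_number TEXT)")
con.executemany("INSERT INTO xyz VALUES (?, ?)", [
    ("camp1", "1"), ("camp1", "2"), ("camp1", "3"), ("camp1", "5"),  # has all four
    ("camp2", "1"), ("camp2", "2"),                                  # missing two
])

# Keep only campaigns that cover every account in the list: relational division.
rows = con.execute("""
    SELECT campaign_id
    FROM xyz
    WHERE account_number IN ('1', '2', '3', '5')
    GROUP BY campaign_id
    HAVING COUNT(DISTINCT account_number) = 4
""").fetchall()
```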
qid & accept id:
(23433143, 23433168)
query:
How to select records that have multiple values in sql?
soup:
To return all the subscription plan IDs in one row, use GROUP_CONCAT:
\nSELECT user_id, GROUP_CONCAT(DISTINCT subscription_plan_id), MIN(created_at), MAX(created_at)\nFROM\n subscriptions\nWHERE \n created_at BETWEEN '2014-01-01' AND '2014-01-31'\nGROUP BY\n user_id\nHAVING\n COUNT(DISTINCT subscription_plan_id) > 1\n
\nTo return them in multiple rows:
\nSELECT DISTINCT user_id, subscription_plan_id, created_at\nFROM subscriptions s\nWHERE user_id IN (\n SELECT user_id\n FROM subscriptions\n WHERE \n created_at BETWEEN '2014-01-01' AND '2014-01-31'\n GROUP BY\n user_id\n HAVING\n COUNT(DISTINCT subscription_plan_id) > 1)\nAND created_at BETWEEN '2014-01-01' AND '2014-01-31'\nORDER BY user_id, created_at\n
\n
soup wrap:
To return all the subscription plan IDs in one row, use GROUP_CONCAT:
SELECT user_id, GROUP_CONCAT(DISTINCT subscription_plan_id), MIN(created_at), MAX(created_at)
FROM
subscriptions
WHERE
created_at BETWEEN '2014-01-01' AND '2014-01-31'
GROUP BY
user_id
HAVING
COUNT(DISTINCT subscription_plan_id) > 1
To return them in multiple rows:
SELECT DISTINCT user_id, subscription_plan_id, created_at
FROM subscriptions s
WHERE user_id IN (
SELECT user_id
FROM subscriptions
WHERE
created_at BETWEEN '2014-01-01' AND '2014-01-31'
GROUP BY
user_id
HAVING
COUNT(DISTINCT subscription_plan_id) > 1)
AND created_at BETWEEN '2014-01-01' AND '2014-01-31'
ORDER BY user_id, created_at
qid & accept id:
(23470309, 23474733)
query:
sql select query self join or loop through to fetch records
soup:
This is a recursive query: For all rooms go to the connecting room till you find the one that has no more connecting room (i.e. connecting room id is 0).
\nwith rooms (roomid, connectingroomid) as \n(\n select \n roomid,\n case when connectingroomid = 0 then \n roomid \n else \n connectingroomid \n end as connectingroomid\n from room\n where connectingroomid = 0\n union all\n select room.roomid, rooms.connectingroomid \n from room\n inner join rooms on room.connectingroomid = rooms.roomid\n) \nselect * from rooms\norder by connectingroomid, roomid;\n
\nHere is the SQL fiddle: http://www.sqlfiddle.com/#!3/46ed0/1.
\nEDIT: Here is the explanation. Rather than doing this in the comments I am doing it here for better readability.
\nThe WITH clause is used to create a recursion here. You see I named it rooms and inside rooms I select from rooms itself. Here is how to read it: Start with the part before UNION ALL. Then recursively do the part after UNION ALL. So, before UNION ALL I only select the records where connectingroomid is zero. In your example you show every room with its connectingroomid, except for those with no connectingroomid, for which you show the room with itself. I use CASE here to do the same. But now that I am explaining this, I notice that connectingroomid is always zero because of the WHERE clause. So the statement can be simplified thus:
\nwith rooms (roomid, connectingroomid) as \n(\n select \n roomid,\n roomid as connectingroomid\n from room where connectingroomid = 0\n union all\n select room.roomid, rooms.connectingroomid \n from room\n inner join rooms on room.connectingroomid = rooms.roomid\n) \nselect * from rooms\norder by connectingroomid, roomid;\n
\nThe SQL fiddle: http://www.sqlfiddle.com/#!3/46ed0/2.
\nWith the part before the UNION ALL I found the two rooms without a connecting room. Now the part after UNION ALL is executed for the two rooms found: it selects the rooms whose connecting room was just found, then again selects the rooms whose connecting room was just found, and so on till the join returns no more rooms.
\nHope this helps you understand the query. You can look for "recursive cte" on the Internet to find more examples and explanations on the topic.
\n
soup wrap:
This is a recursive query: For all rooms go to the connecting room till you find the one that has no more connecting room (i.e. connecting room id is 0).
with rooms (roomid, connectingroomid) as
(
select
roomid,
case when connectingroomid = 0 then
roomid
else
connectingroomid
end as connectingroomid
from room
where connectingroomid = 0
union all
select room.roomid, rooms.connectingroomid
from room
inner join rooms on room.connectingroomid = rooms.roomid
)
select * from rooms
order by connectingroomid, roomid;
Here is the SQL fiddle: http://www.sqlfiddle.com/#!3/46ed0/1.
EDIT: Here is the explanation. Rather than doing this in the comments I am doing it here for better readability.
The WITH clause is used to create a recursion here. You see I named it rooms and inside rooms I select from rooms itself. Here is how to read it: Start with the part before UNION ALL. Then recursively do the part after UNION ALL. So, before UNION ALL I only select the records where connectingroomid is zero. In your example you show every room with its connectingroomid, except for those with no connectingroomid, for which you show the room with itself. I use CASE here to do the same. But now that I am explaining this, I notice that connectingroomid is always zero because of the WHERE clause. So the statement can be simplified thus:
with rooms (roomid, connectingroomid) as
(
select
roomid,
roomid as connectingroomid
from room where connectingroomid = 0
union all
select room.roomid, rooms.connectingroomid
from room
inner join rooms on room.connectingroomid = rooms.roomid
)
select * from rooms
order by connectingroomid, roomid;
The SQL fiddle: http://www.sqlfiddle.com/#!3/46ed0/2.
With the part before the UNION ALL I found the two rooms without a connecting room. Now the part after UNION ALL is executed for the two rooms found: it selects the rooms whose connecting room was just found, then again selects the rooms whose connecting room was just found, and so on till the join returns no more rooms.
Hope this helps you understand the query. You can look for "recursive cte" on the Internet to find more examples and explanations on the topic.
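The simplified recursive query runs essentially unchanged on sqlite3, which makes it convenient to experiment with (sample room data made up here: one chain 1 <- 2 <- 3, plus room 4 on its own):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE room (roomid INTEGER, connectingroomid INTEGER)")
# Two chains: 1 <- 2 <- 3, and 4 on its own (0 means "no connecting room").
con.executemany("INSERT INTO room VALUES (?, ?)", [
    (1, 0), (2, 1), (3, 2), (4, 0),
])

rows = con.execute("""
    WITH RECURSIVE rooms (roomid, connectingroomid) AS (
        -- anchor: rooms with no connecting room, paired with themselves
        SELECT roomid, roomid FROM room WHERE connectingroomid = 0
        UNION ALL
        -- recursive step: rooms whose connecting room was just found
        SELECT room.roomid, rooms.connectingroomid
        FROM room JOIN rooms ON room.connectingroomid = rooms.roomid
    )
    SELECT roomid, connectingroomid FROM rooms
    ORDER BY connectingroomid, roomid
""").fetchall()
```

Every room in the 1 <- 2 <- 3 chain ends up labelled with root room 1, while room 4 is its own root.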
qid & accept id:
(23478919, 23479108)
query:
Referencing table in another database
soup:
Yes, the kind of reference you describe is called a table synonym in SQL Server.
\nUSE DBS\nGO\n\nCREATE SYNONYM [dbo].[secondaryTableReference] FOR [DBS].[dbo].[secondaryTable]\nGO\n
\nThen you may query it as though it is a table in your primary database.
\nSELECT * FROM [dbo].[secondaryTableReference]\n
\n
soup wrap:
Yes, the kind of reference you describe is called a table synonym in SQL Server.
USE DBS
GO
CREATE SYNONYM [dbo].[secondaryTableReference] FOR [DBS].[dbo].[secondaryTable]
GO
Then you may query it as though it is a table in your primary database.
SELECT * FROM [dbo].[secondaryTableReference]
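Synonyms are SQL Server-specific, but the idea of reaching another database's table through the current connection has rough equivalents elsewhere; for instance, sqlite3's ATTACH (a loose analogy, sketched here with invented names):

```python
import sqlite3, tempfile, os

# Build a "secondary" database file containing the table we want to reach.
path = os.path.join(tempfile.mkdtemp(), "secondary.db")
sec = sqlite3.connect(path)
sec.execute("CREATE TABLE secondaryTable (id INTEGER, name TEXT)")
sec.execute("INSERT INTO secondaryTable VALUES (1, 'hello')")
sec.commit()
sec.close()

# From the "primary" database, attach the secondary one under an alias
# and query its table with a qualified name, much like a cross-database reference.
con = sqlite3.connect(":memory:")
con.execute("ATTACH DATABASE ? AS dbs", (path,))
rows = con.execute("SELECT id, name FROM dbs.secondaryTable").fetchall()
```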
qid & accept id:
(23507472, 23507574)
query:
SUM of columns and displaying multiple queries
soup:
Try using:
\n$result = mysql_query("SELECT productLine, SUM(buyPrice) AS sum_buy_price, SUM(MSRP) AS sum_msrp FROM myTable group by productLine"); // selecting data through mysql_query()\n
\nand to output the results:
\necho "";\nwhile($row = mysql_fetch_array($result))\n{\n // we are running a while loop to print all the rows in a table\n echo ""; \n echo "" . $row['productLine'] . " "; \n echo "" . $row['sum_buy_price'] . " "; \n echo "" . $row['sum_msrp'] . " "; \n echo " "; \n}\necho "
";\n
\n
soup wrap:
Try using:
$result = mysql_query("SELECT productLine, SUM(buyPrice) AS sum_buy_price, SUM(MSRP) AS sum_msrp FROM myTable group by productLine"); // selecting data through mysql_query()
and to output the results:
echo "";
while($row = mysql_fetch_array($result))
{
// we are running a while loop to print all the rows in a table
echo "";
echo "" . $row['productLine'] . " ";
echo "" . $row['sum_buy_price'] . " ";
echo "" . $row['sum_msrp'] . " ";
echo " ";
}
echo "
";
qid & accept id:
(23527871, 23531944)
query:
change SQL column from Float to Decimal Type
soup:
You can simply update the Rate data and then change the column data type.
\nFirst, you can verify the CAST by using the following query (for only rows that have the decimal part < 0.000001)
\nSELECT \n [Rate],\n CAST([Rate] as decimal(28, 6)) Rate_decimal\nFROM [dbo].[TES_Tracks]\nWHERE [Rate] - FLOOR([Rate]) < 0.000001;\n
\nOnce you have verified that the CAST expression is correct, then you can apply it using an UPDATE statement. Again, you can update only those rows for which [Rate] - FLOOR([Rate]) < 0.000001, thus getting good performance.
\nUPDATE [dbo].[TES_Tracks]\nSET [Rate] = CAST([Rate] as decimal(28, 6))\nWHERE [Rate] - FLOOR([Rate]) < 0.000001;\n\nALTER TABLE [dbo].[TES_Tracks] ALTER COLUMN [Rate] DECIMAL(28,6);\n
\nThis way, you would not need to drop the Rate column.
\n\n
soup wrap:
You can simply update the Rate data and then change the column data type.
First, you can verify the CAST by using the following query (for only rows that have the decimal part < 0.000001)
SELECT
[Rate],
CAST([Rate] as decimal(28, 6)) Rate_decimal
FROM [dbo].[TES_Tracks]
WHERE [Rate] - FLOOR([Rate]) < 0.000001;
Once you have verified that the CAST expression is correct, then you can apply it using an UPDATE statement. Again, you can update only those rows for which [Rate] - FLOOR([Rate]) < 0.000001, thus getting good performance.
UPDATE [dbo].[TES_Tracks]
SET [Rate] = CAST([Rate] as decimal(28, 6))
WHERE [Rate] - FLOOR([Rate]) < 0.000001;
ALTER TABLE [dbo].[TES_Tracks] ALTER COLUMN [Rate] DECIMAL(28,6);
This way, you would not need to drop the Rate column.
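Why the CAST matters at all: binary floats cannot represent most decimal fractions exactly, so float-typed rates accumulate representation noise. A quick Python illustration of the rounding step, with decimal.Decimal standing in for DECIMAL(28, 6):

```python
from decimal import Decimal

# A rate computed as a binary float picks up representation noise.
rate = 0.1 + 0.2                      # not exactly 0.3

# Quantizing to six fractional digits mirrors CAST(... AS DECIMAL(28, 6)).
fixed = Decimal(rate).quantize(Decimal("0.000001"))
```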
qid & accept id:
(23552848, 23552906)
query:
Can I add aggregated column without performing a join?
soup:
Depending on what your function is, you can use window functions (sometimes called analytic functions). For instance, if you wanted the maximum value of b for a given a:
\nselect a, b, c, max(b) over (partition by a) as d\nfrom table1;\n
\nWithout more information, it is hard to be more specific.
\nEDIT:
\nYou should be able to do this with analytic functions:
\nselect count , avg, variance,\n (sum(count * avg) over (partition by b) /\n sum(count) over (partition by b)\n ) as weighted_average\nfrom view_1;\n
\n
soup wrap:
Depending on what your function is, you can use window functions (sometimes called analytic functions). For instance, if you wanted the maximum value of b for a given a:
select a, b, c, max(b) over (partition by a) as d
from table1;
Without more information, it is hard to be more specific.
EDIT:
You should be able to do this with analytic functions:
select count , avg, variance,
(sum(count * avg) over (partition by b) /
sum(count) over (partition by b)
) as weighted_average
from view_1;
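The weighted-average window expression can be tried out in sqlite3 as well (column names invented, since the original view's schema isn't shown; grp stands in for column b):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE view_1 (grp TEXT, cnt INTEGER, avg_val REAL)")
con.executemany("INSERT INTO view_1 VALUES (?, ?, ?)", [
    ("b1", 1, 10.0),
    ("b1", 3, 20.0),   # weighted avg for b1 = (1*10 + 3*20) / 4 = 17.5
])

# Each row carries its group's weighted average; no self-join required.
rows = con.execute("""
    SELECT grp, cnt, avg_val,
           SUM(cnt * avg_val) OVER (PARTITION BY grp) * 1.0 /
           SUM(cnt) OVER (PARTITION BY grp) AS weighted_average
    FROM view_1
""").fetchall()
```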
qid & accept id:
(23594298, 23594347)
query:
Select all data which is associated in and combination
soup:
You can do it like so
\nSELECT * FROM documents d\nRIGHT JOIN doc_labels dl\nON(d.id = dl.doc_id)\nWHERE dl.label_id IN(1,2)\nGROUP BY d.id\nHAVING COUNT(DISTINCT dl.label_id) >= 2 /*this will give you the documents that must have labels 1 and 2 and can have more labels*/\n
\nOr if you need the documents with only label 1 and 2 then change
\nHAVING COUNT(DISTINCT dl.label_id) = 2\n
\n
soup wrap:
You can do it like so
SELECT * FROM documents d
RIGHT JOIN doc_labels dl
ON(d.id = dl.doc_id)
WHERE dl.label_id IN(1,2)
GROUP BY d.id
HAVING COUNT(DISTINCT dl.label_id) >= 2 /*this will give you the documents that must have labels 1 and 2 and can have more labels*/
Or if you need the documents with only label 1 and 2 then change
HAVING COUNT(DISTINCT dl.label_id) = 2
qid & accept id:
(23608624, 23608729)
query:
select rows mysql where the value of the left join is different
soup:
You can do it like so
\nselect *\nfrom messages m\nleft join deleted_messages d on d.message_id = m.id\nwhere \n d.message_id IS NULL\nAND m.user_id = 1\n
\nThis will give all the messages from user 1 which are not deleted
\nDemo
\nAnother way is to use NOT EXISTS
\nselect *\nfrom messages m\nwhere not exists\n(select 1 from deleted_messages d where d.message_id = m.id)\nAND m.user_id = 1\n
\nDemo
\nFor the performance comparison you can find the details here:\nLEFT JOIN / IS NULL vs. NOT IN vs. NOT EXISTS: nullable columns
\n
soup wrap:
You can do it like so
select *
from messages m
left join deleted_messages d on d.message_id = m.id
where
d.message_id IS NULL
AND m.user_id = 1
This will give all the messages from user 1 which are not deleted
Demo
Another way is to use NOT EXISTS
select *
from messages m
where not exists
(select 1 from deleted_messages d where d.message_id = m.id)
AND m.user_id = 1
Demo
For the performance comparison you can find the details here:
LEFT JOIN / IS NULL vs. NOT IN vs. NOT EXISTS: nullable columns
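Both anti-join forms are easy to compare side by side; a sqlite3 sketch with a few invented messages, showing that they return the same rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE messages (id INTEGER, user_id INTEGER)")
con.execute("CREATE TABLE deleted_messages (message_id INTEGER)")
con.executemany("INSERT INTO messages VALUES (?, ?)", [(1, 1), (2, 1), (3, 2)])
con.execute("INSERT INTO deleted_messages VALUES (2)")   # message 2 is deleted

# Anti-join via LEFT JOIN ... IS NULL.
left_join = con.execute("""
    SELECT m.id FROM messages m
    LEFT JOIN deleted_messages d ON d.message_id = m.id
    WHERE d.message_id IS NULL AND m.user_id = 1
""").fetchall()

# The same anti-join via NOT EXISTS.
not_exists = con.execute("""
    SELECT m.id FROM messages m
    WHERE NOT EXISTS (SELECT 1 FROM deleted_messages d WHERE d.message_id = m.id)
      AND m.user_id = 1
""").fetchall()
```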
qid & accept id:
(23626176, 23626513)
query:
Combining data between three tables
soup:
What you can do to achieve this is to use joins: here is some MySQL documentation about this
\nBut here, using 2 tables for single partners and popularity is not really needed: since one line of single_partners corresponds to exactly one line of partner_popularity, you can put them in the same table. You should merge them and use a default of zero when the partner has no popularity registered, so it will show last when sorting by popularity.
\nSo, then you'll have 2 tables :
\nTable 1 - partners
\n| partner_id | name | type | logo |\n
\nTable 2 - single_partners
\n| id | partner_id | address | zipcode | city | pop_men | pop_women | pop_family\n
\nNow your query to select all of that becomes extremely simple (just select the partners, filter the city, order them and you're done), and with a little grouping and a join you can also select partners sorted by popularity summed across all cities:
\nSELECT p.*,\n SUM(pop_men) AS total_pop_men,\n SUM(pop_women) AS total_pop_women,\n SUM(pop_family) AS total_pop_family\nFROM partners p\nJOIN single_partners sp ON sp.partner_id = p.partner_id\nGROUP BY partner_id\nORDER BY total_pop_men DESC,\n total_pop_women DESC,\n total_pop_family DESC\n
\n
soup wrap:
What you can do to achieve this is to use joins: here is some MySQL documentation about this
But here, using 2 tables for single partners and popularity is not really needed: since one line of single_partners corresponds to exactly one line of partner_popularity, you can put them in the same table. You should merge them and use a default of zero when the partner has no popularity registered, so it will show last when sorting by popularity.
So, then you'll have 2 tables :
Table 1 - partners
| partner_id | name | type | logo |
Table 2 - single_partners
| id | partner_id | address | zipcode | city | pop_men | pop_women | pop_family
Now your query to select all of that becomes extremely simple (just select the partners, filter the city, order them and you're done), and with a little grouping and a join you can also select partners sorted by popularity summed across all cities:
SELECT p.*,
SUM(pop_men) AS total_pop_men,
SUM(pop_women) AS total_pop_women,
SUM(pop_family) AS total_pop_family
FROM partners p
JOIN single_partners sp ON sp.partner_id = p.partner_id
GROUP BY partner_id
ORDER BY total_pop_men DESC,
total_pop_women DESC,
total_pop_family DESC
qid & accept id:
(23642201, 23642292)
query:
How do I write paging/limits into a SQL query for 2008 R2?
soup:
Since you're using Server 2008, you can use this excellent example from that link. (formatted to be more readable):
\nDECLARE @RowsPerPage INT = 10\nDECLARE @PageNumber INT = 6\n\nSELECT SalesOrderDetailID\n ,SalesOrderID\n ,ProductID\nFROM (\n SELECT SalesOrderDetailID\n ,SalesOrderID\n ,ProductID\n ,ROW_NUMBER() OVER (\n ORDER BY SalesOrderDetailID\n ) AS RowNum\n FROM Sales.SalesOrderDetail\n ) AS SOD\nWHERE SOD.RowNum BETWEEN ((@PageNumber - 1) * @RowsPerPage) + 1\n AND @RowsPerPage * (@PageNumber)\n
\nThis will return the sixth page, of ten records on each page. ROW_NUMBER() basically assigns a temporary Identity column for this query, ordered by SalesOrderDetailID.
\nYou can then select records where row number is between 61-70, for that sixth page.
\nHope that makes sense
\n
\nWorking from your added attempt:
\nDECLARE @RowsPerPage INT = 10\nDECLARE @PageNumber INT = 6\n\nSELECT *\nFROM (\n SELECT t1.*\n ,t3.[timestamp]\n ,t3.comments\n ,ROW_NUMBER() OVER (\n ORDER BY t1.id\n ) AS RowNum\n FROM crm_main t1\n INNER JOIN crm_group_relationships t2 ON t1.id = t2.customerid\n OUTER APPLY (\n SELECT TOP 1 t3.[timestamp]\n ,t3.customerid\n ,t3.comments\n FROM crm_comments t3\n WHERE t1.id = t3.customerid\n ORDER BY t3.TIMESTAMP ASC\n ) t3\n WHERE t1.dealerid = '9999'\n AND t2.groupid = '251'\n ) AS x\nWHERE x.RowNum BETWEEN ((@PageNumber - 1) * @RowsPerPage) + 1\n AND @RowsPerPage * (@PageNumber)\n
\n
soup wrap:
Since you're using Server 2008, you can use this excellent example from that link. (formatted to be more readable):
DECLARE @RowsPerPage INT = 10
DECLARE @PageNumber INT = 6
SELECT SalesOrderDetailID
,SalesOrderID
,ProductID
FROM (
SELECT SalesOrderDetailID
,SalesOrderID
,ProductID
,ROW_NUMBER() OVER (
ORDER BY SalesOrderDetailID
) AS RowNum
FROM Sales.SalesOrderDetail
) AS SOD
WHERE SOD.RowNum BETWEEN ((@PageNumber - 1) * @RowsPerPage) + 1
AND @RowsPerPage * (@PageNumber)
This will return the sixth page, of ten records on each page. ROW_NUMBER() basically assigns a temporary Identity column for this query, ordered by SalesOrderDetailID.
You can then select records where row number is between 61-70, for that sixth page.
Hope that makes sense
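The ROW_NUMBER() paging pattern can be tried directly; a sqlite3 sketch with a tiny invented table, two rows per page, fetching page three:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO orders VALUES (?)", [(i,) for i in range(1, 8)])

rows_per_page, page_number = 2, 3   # third page of two rows each -> ids 5 and 6

# Number every row, then keep only the window belonging to the requested page.
rows = con.execute("""
    SELECT id
    FROM (SELECT id, ROW_NUMBER() OVER (ORDER BY id) AS rownum FROM orders)
    WHERE rownum BETWEEN (? - 1) * ? + 1 AND ? * ?
""", (page_number, rows_per_page, rows_per_page, page_number)).fetchall()
```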
Working from your added attempt:
DECLARE @RowsPerPage INT = 10
DECLARE @PageNumber INT = 6
SELECT *
FROM (
SELECT t1.*
,t3.[timestamp]
,t3.comments
,ROW_NUMBER() OVER (
ORDER BY t1.id
) AS RowNum
FROM crm_main t1
INNER JOIN crm_group_relationships t2 ON t1.id = t2.customerid
OUTER APPLY (
SELECT TOP 1 t3.[timestamp]
,t3.customerid
,t3.comments
FROM crm_comments t3
WHERE t1.id = t3.customerid
ORDER BY t3.TIMESTAMP ASC
) t3
WHERE t1.dealerid = '9999'
AND t2.groupid = '251'
) AS x
WHERE x.RowNum BETWEEN ((@PageNumber - 1) * @RowsPerPage) + 1
AND @RowsPerPage * (@PageNumber)
qid & accept id:
(23676371, 23680644)
query:
Performance monitoring for standalone .NET desktop application with New Relic
soup:
I work for New Relic.
\nIt is possible to monitor the performance of non-IIS applications as long as they meet these requirements:
\n\nThe Instrument All .NET Applications feature must be enabled
\nApp.config and/or newrelic.config will need to be configured for the .exe
\n
\nYou can read more about these requirements on our documentation site here:\nhttps://docs.newrelic.com/docs/dotnet/instrumenting-custom-applications
\nYou may need to gather custom metrics by using our .NET agent API. The methods RecordMetric, RecordResponseTimeMetric, and IncrementCounter specifically work with non-web applications.\nOur .NET agent API documentation is located here: https://docs.newrelic.com/docs/dotnet/net-agent-api
\nYou can also set up custom transactions to trace non-web transactions. We can normally trace functions that use HttpObjects, but the following is a new feature implemented in agent version 2.24.218.0.\nIn the cases of non-web apps and async calls where there is no transaction context the following feature can be used to create transactions where the agent would normally not do so. This is a manual process via a custom instrumentation file.
\nCreate a custom instrumentation file named, say, CustomInstrumentation.xml in C:\ProgramData\New Relic\.NET Agent\Extensions alongside CoreInstrumentation.xml. Add the following content to your custom instrumentation file:
\nYou must change the attribute values Category/Name, AssemblyName, NameSpace.ClassName, and MethodName above:
\nThe transaction starts when an object of type NameSpace.ClassName from assembly AssemblyName invokes the method MethodName. The transaction ends when the method returns or throws an exception. The transaction will be named Name and will be grouped into the transaction type specified by Category. In the New Relic UI you can select the transaction type from the Type drop down menu when viewing the Monitoring > Transactions page.
\nNote that both Category and Name must be present and must be separated by a slash.
\nAs you would expect, instrumented activity (methods, database, externals) occurring during the method's invocation will be shown in the transaction's breakdown table and in transaction traces.
\nHere is a more concrete example. First, the instrumentation file:
\nNow some code:
\nvar foo = new Foo();\nfoo.Bar1(); // Creates a transaction named Bars in category Background\nfoo.Bar2(); // Same here.\nfoo.Bar3(); // Won't create a new transaction. See notes below.\n\npublic class Foo\n{\n // this will result in a transaction with an External Service request segment in the transaction trace\n public void Bar1()\n {\n new WebClient().DownloadString("http://www.google.com/");\n }\n\n // this will result in a transaction that has one segment with a category of "Custom" and a name of "some custom metric name"\n public void Bar2()\n {\n // the segment for Bar3 will contain your SQL query inside of it and possibly an execution plan\n Bar3();\n }\n\n // if Bar3 is called directly, it won't get a transaction made for it.\n // However, if it is called inside of Bar1 or Bar2 then it will show up as a segment containing the SQL query\n private void Bar3()\n {\n using (var connection = new SqlConnection(ConfigurationManager.ConnectionStrings["MsSqlConnection"].ConnectionString))\n {\n connection.Open();\n using (var command = new SqlCommand("SELECT * FROM table", connection))\n using (var reader = command.ExecuteReader())\n {\n reader.Read();\n }\n }\n }\n}\n
\nHere is a simple console app that demonstrates Custom Transactions:
\nusing System;\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Text;\nusing System.Threading.Tasks;\n\nnamespace ConsoleApplication1\n{\n class Program\n {\n static void Main(string[] args)\n {\n Console.WriteLine("Custom Transactions");\n var t = new CustomTransaction();\n for (int i = 0; i < 100; ++i )\n t.StartTransaction();\n }\n }\n class CustomTransaction\n {\n public void StartTransaction()\n {\n Console.WriteLine("StartTransaction"); \n Dummy();\n }\n void Dummy()\n {\n System.Threading.Thread.Sleep(5000);\n }\n }\n\n}\n
\nUse the following custom instrumentation file:
\n
soup wrap:
I work for New Relic.
It is possible to monitor the performance of non-IIS applications as long as they meet these requirements:
The Instrument All .NET Applications feature must be enabled
App.config and/or newrelic.config will need to be configured for the .exe
You can read more about these requirements on our documentation site here:
https://docs.newrelic.com/docs/dotnet/instrumenting-custom-applications
You may need to gather custom metrics by using our .NET agent API. The methods RecordMetric, RecordResponseTimeMetric, and IncrementCounter specifically work with non-web applications.
Our .NET agent API documentation is located here: https://docs.newrelic.com/docs/dotnet/net-agent-api
You can also set up custom transactions to trace non-web transactions. We can normally trace functions that use HttpObjects, but the following is a new feature implemented in agent version 2.24.218.0.
In the cases of non-web apps and async calls where there is no transaction context the following feature can be used to create transactions where the agent would normally not do so. This is a manual process via a custom instrumentation file.
Create a custom instrumentation file named, say, CustomInstrumentation.xml, in C:\ProgramData\New Relic\.NET Agent\Extensions alongside CoreInstrumentation.xml. Add the following content to your custom instrumentation file:
You must change the attribute values Category/Name, AssemblyName, NameSpace.ClassName, and MethodName above:
The transaction starts when an object of type NameSpace.ClassName from assembly AssemblyName invokes the method MethodName. The transaction ends when the method returns or throws an exception. The transaction will be named Name and will be grouped into the transaction type specified by Category. In the New Relic UI you can select the transaction type from the Type drop down menu when viewing the Monitoring > Transactions page.
Note that both Category and Name must be present and must be separated by a slash.
As you would expect, instrumented activity (methods, database, externals) occurring during the method's invocation will be shown in the transaction's breakdown table and in transaction traces.
Here is a more concrete example. First, the instrumentation file:
Now some code:
var foo = new Foo();
foo.Bar1(); // Creates a transaction named Bars in category Background
foo.Bar2(); // Same here.
foo.Bar3(); // Won't create a new transaction. See notes below.
public class Foo
{
// this will result in a transaction with an External Service request segment in the transaction trace
public void Bar1()
{
new WebClient().DownloadString("http://www.google.com/");
}
// this will result in a transaction that has one segment with a category of "Custom" and a name of "some custom metric name"
public void Bar2()
{
// the segment for Bar3 will contain your SQL query inside of it and possibly an execution plan
Bar3();
}
// if Bar3 is called directly, it won't get a transaction made for it.
// However, if it is called inside of Bar1 or Bar2 then it will show up as a segment containing the SQL query
private void Bar3()
{
using (var connection = new SqlConnection(ConfigurationManager.ConnectionStrings["MsSqlConnection"].ConnectionString))
{
connection.Open();
using (var command = new SqlCommand("SELECT * FROM table", connection))
using (var reader = command.ExecuteReader())
{
reader.Read();
}
}
}
}
Here is a simple console app that demonstrates Custom Transactions:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace ConsoleApplication1
{
class Program
{
static void Main(string[] args)
{
Console.WriteLine("Custom Transactions");
var t = new CustomTransaction();
for (int i = 0; i < 100; ++i )
t.StartTransaction();
}
}
class CustomTransaction
{
public void StartTransaction()
{
Console.WriteLine("StartTransaction");
Dummy();
}
void Dummy()
{
System.Threading.Thread.Sleep(5000);
}
}
}
Use the following custom instrumentation file:
qid & accept id:
(23705421, 23710941)
query:
Get the rest of the row in a max group by
soup:
I would think this would solve your problem:
\nSELECT who.employee_id, course.course_id,\n MAX(add_months(sess.end_date, vers.valid_for_months))\n
\nThat gets the latest end date. If you want the end date for the last session, use row_number():
\nSELECT employee_id, course_id, end_date\nFROM (SELECT who.employee_id, course.course_id, sess.end_date,\n row_number() over (partition by who.employee_id, course.course_id\n order by sess.end_date desc\n ) as seqnum\n FROM employee_session_join esj\n JOIN training_session sess on sess.session_id = esj.session_id\n JOIN course_version vers on vers.version_id = sess.version_id\n JOIN course course on course.course_id = vers.course_id\n JOIN employee who on who.employee_id = esj.employee_id\n WHERE esj.active_flag = 'Y'\n AND sess.active_flag = 'Y'\n AND course.active_flag = 'Y'\n AND who.active_flag = 'Y'\n AND esj.approval_status = 5 -- successfully passed\n) e\nWHERE seqnum = 1;\n
\n
soup wrap:
I would think this would solve your problem:
SELECT who.employee_id, course.course_id,
MAX(add_months(sess.end_date, vers.valid_for_months))
That gets the latest end date. If you want the end date for the last session, use row_number():
SELECT employee_id, course_id, end_date
FROM (SELECT who.employee_id, course.course_id, sess.end_date,
row_number() over (partition by who.employee_id, course.course_id
order by sess.end_date desc
) as seqnum
FROM employee_session_join esj
JOIN training_session sess on sess.session_id = esj.session_id
JOIN course_version vers on vers.version_id = sess.version_id
JOIN course course on course.course_id = vers.course_id
JOIN employee who on who.employee_id = esj.employee_id
WHERE esj.active_flag = 'Y'
AND sess.active_flag = 'Y'
AND course.active_flag = 'Y'
AND who.active_flag = 'Y'
AND esj.approval_status = 5 -- successfully passed
) e
WHERE seqnum = 1;
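The partition-and-pick-one pattern is easy to verify in miniature. Here is a hedged sketch using Python's sqlite3 (SQLite 3.25+ for window functions), with an invented two-column session table standing in for the joined tables above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sessions (employee_id INT, course_id INT, end_date TEXT);
INSERT INTO sessions VALUES
 (1, 10, '2014-01-01'), (1, 10, '2014-06-01'),
 (2, 10, '2014-03-01');
""")

# seqnum = 1 within each (employee, course) partition, newest first.
rows = conn.execute("""
    SELECT employee_id, course_id, end_date
    FROM (SELECT employee_id, course_id, end_date,
                 ROW_NUMBER() OVER (PARTITION BY employee_id, course_id
                                    ORDER BY end_date DESC) AS seqnum
          FROM sessions)
    WHERE seqnum = 1
    ORDER BY employee_id
""").fetchall()
latest = {(e, c): d for e, c, d in rows}
```

Ordering the partition by end_date DESC is what makes seqnum = 1 mean "the last session" for each employee/course pair.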
qid & accept id:
(23768482, 23779481)
query:
SSIS Converting Percent to Decimal
soup:
Hope this is what you are looking for
\nExcel sheet like this is the source.
\n
\nI just tested it in my system and it is working fine. This is what I did.
\n\n- Created an SSIS package with just 1 DFT.
\n- Data flow is given below. Please note that the value which appeared as 40% in the Excel sheet is visible as 0.40, so I added two derived columns: one passing the value through as-is and the next multiplying it by 100.
\n
\n
\nthe derived column structure is shown below.
\n
\nThe destination table structure would be
\nCreate table Destination\n(\nid int,\nname varchar(15),\nhike decimal(8,2)\n)\n
\nI am getting the result as expected.
\nSelect * from Destination\n
\n
\n
soup wrap:
Hope this is what you are looking for
Excel sheet like this is the source.
I just tested it in my system and it is working fine. This is what I did.
- Created an SSIS package with just 1 DFT.
- Data flow is given below. Please note that the value which appeared as 40% in the Excel sheet is visible as 0.40, so I added two derived columns: one passing the value through as-is and the next multiplying it by 100.
the derived column structure is shown below.
The destination table structure would be
Create table Destination
(
id int,
name varchar(15),
hike decimal(8,2)
)
I am getting the result as expected.
Select * from Destination
qid & accept id:
(23803359, 23803584)
query:
SQL selecting a column, SUM and ORDER BY using three tables
soup:
Sub query to get the latest price date, and join to prices:-
\nSELECT stocks.id, stocks.size, prices.price, SUM(stocks.qty) - sales.qtySold \nFROM stocks\nINNER JOIN\n(\n SELECT id, size, MAX(priceDT) AS MaxPriceDate\n FROM prices\n GROUP BY id, size\n) Sub1\nON stocks.id = Sub1.id AND stocks.size = Sub1.size\nINNER JOIN prices\nON Sub1.id = prices.id AND Sub1.size = prices.size AND Sub1.MaxPriceDate = prices.priceDT\nINNER JOIN sales\nON stocks.id = sales.id AND stocks.size = sales.size\nGROUP BY stocks.id, stocks.size\n
\nMy concern is that sales has multiple rows for each id / size
\nEDIT - to cope with multiple rows on sales for an id / size using an additional subquery:-
\nSELECT stocks.id, stocks.size, prices.price, SUM(stocks.qty) - Sub2.tot_qtySold \nFROM stocks\nINNER JOIN\n(\n SELECT id, size, MAX(priceDT) AS MaxPriceDate\n FROM prices\n GROUP BY id, size\n) Sub1\nON stocks.id = Sub1.id AND stocks.size = Sub1.size\nINNER JOIN prices\nON Sub1.id = prices.id AND Sub1.size = prices.size AND Sub1.MaxPriceDate = prices.priceDT\nINNER JOIN\n(\n SELECT id, size, SUM(qtySold) AS tot_qtySold\n FROM sales\n GROUP BY id, size\n) Sub2\nON stocks.id = Sub2.id AND stocks.size = Sub2.size\nGROUP BY stocks.id, stocks.size\n
\nON sqlfiddle:-
\nhttp://www.sqlfiddle.com/#!2/f7d37/2
\nEDIT - in answer to a question posted in the comment:-
\nThe reason for this is that there are 2 matching records on the stocks table.
\nSo for brandid 100 and size of 90 there are these 2 records from stocks:-
\nbrandId size qtyArr\n(100 , 90 , 10),\n(100 , 90 , 100),\n
\nand this one from sales:-
\nbrandId size qtySold\n(100, 90, 35),\n
\nSo MySQL will initially build up a table containing a set of 2 rows. The first row will contain the first row from stocks and the only matching row from sales. The 2nd row will have the 2nd row from stocks and (again) the matching row from sales.
\nbrandId size qtyArr brandId size qtySold\n(100, 90, 10, 100, 90, 35),\n(100, 90, 100, 100, 90, 35),\n
\nIt then performs the SUM of qtySold, but the quantities are counted twice (ie, once for each matching record on stocks).
\nTo get around this, you will likely need a sub query to get the total qtySold for each brand / size, then join the results of that sub query against the stocks table
\nSELECT SUM(s.qtyArr), MAX(l.qtySold) -- MAX, not SUM: the pre-aggregated sales row repeats for each matching stocks row\nFROM stocks s \nINNER join \n(\n SELECT brandId, size, SUM(qtySold) AS qtySold\n FROM sales\n GROUP BY brandId, size\n) l \nON l.brandId = s.brandId\nAND l.size = s.size\nWHERE s.brandId='100' AND s.size='90';\n
\n
soup wrap:
Sub query to get the latest price date, and join to prices:-
SELECT stocks.id, stocks.size, prices.price, SUM(stocks.qty) - sales.qtySold
FROM stocks
INNER JOIN
(
SELECT id, size, MAX(priceDT) AS MaxPriceDate
FROM prices
GROUP BY id, size
) Sub1
ON stocks.id = Sub1.id AND stocks.size = Sub1.size
INNER JOIN prices
ON Sub1.id = prices.id AND Sub1.size = prices.size AND Sub1.MaxPriceDate = prices.priceDT
INNER JOIN sales
ON stocks.id = sales.id AND stocks.size = sales.size
GROUP BY stocks.id, stocks.size
My concern is that sales has multiple rows for each id / size
EDIT - to cope with multiple rows on sales for an id / size using an additional subquery:-
SELECT stocks.id, stocks.size, prices.price, SUM(stocks.qty) - Sub2.tot_qtySold
FROM stocks
INNER JOIN
(
SELECT id, size, MAX(priceDT) AS MaxPriceDate
FROM prices
GROUP BY id, size
) Sub1
ON stocks.id = Sub1.id AND stocks.size = Sub1.size
INNER JOIN prices
ON Sub1.id = prices.id AND Sub1.size = prices.size AND Sub1.MaxPriceDate = prices.priceDT
INNER JOIN
(
SELECT id, size, SUM(qtySold) AS tot_qtySold
FROM sales
GROUP BY id, size
) Sub2
ON stocks.id = Sub2.id AND stocks.size = Sub2.size
GROUP BY stocks.id, stocks.size
ON sqlfiddle:-
http://www.sqlfiddle.com/#!2/f7d37/2
EDIT - in answer to a question posted in the comment:-
The reason for this is that there are 2 matching records on the stocks table.
So for brandid 100 and size of 90 there are these 2 records from stocks:-
brandId size qtyArr
(100 , 90 , 10),
(100 , 90 , 100),
and this one from sales:-
brandId size qtySold
(100, 90, 35),
So MySQL will initially build up a table containing a set of 2 rows. The first row will contain the first row from stocks and the only matching row from sales. The 2nd row will have the 2nd row from stocks and (again) the matching row from sales.
brandId size qtyArr brandId size qtySold
(100, 90, 10, 100, 90, 35),
(100, 90, 100, 100, 90, 35),
It then performs the SUM of qtySold, but the quantities are counted twice (ie, once for each matching record on stocks).
To get around this, you will likely need a sub query to get the total qtySold for each brand / size, then join the results of that sub query against the stocks table
SELECT SUM(s.qtyArr), MAX(l.qtySold) -- MAX, not SUM: the pre-aggregated sales row repeats for each matching stocks row
FROM stocks s
INNER join
(
SELECT brandId, size, SUM(qtySold) AS qtySold
FROM sales
GROUP BY brandId, size
) l
ON l.brandId = s.brandId
AND l.size = s.size
WHERE s.brandId='100' AND s.size='90';
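To see the double counting and one way around it concretely, here is a small reproduction in Python with an in-memory SQLite database, using the sample rows from above. Taking MAX() of the pre-aggregated value is one way to read it exactly once even though the join repeats it per stocks row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stocks (brandId INT, size INT, qtyArr INT);
CREATE TABLE sales  (brandId INT, size INT, qtySold INT);
INSERT INTO stocks VALUES (100, 90, 10), (100, 90, 100);
INSERT INTO sales  VALUES (100, 90, 35);
""")

# Naive join: the single sales row matches both stock rows, so qtySold doubles.
naive = conn.execute("""
    SELECT SUM(s.qtyArr), SUM(l.qtySold)
    FROM stocks s JOIN sales l
      ON l.brandId = s.brandId AND l.size = s.size
""").fetchone()

# Pre-aggregate sales, then take MAX of the (repeated) per-group total
# so each sold quantity is counted once.
fixed = conn.execute("""
    SELECT SUM(s.qtyArr), MAX(l.tot)
    FROM stocks s JOIN
         (SELECT brandId, size, SUM(qtySold) AS tot
          FROM sales GROUP BY brandId, size) l
      ON l.brandId = s.brandId AND l.size = s.size
""").fetchone()
```

The naive query reports 70 sold (35 counted once per stocks row); the second reports the true 35.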
qid & accept id:
(23807485, 23808051)
query:
How to nest multiple MAX (...) statements in one CASE WHEN Query
soup:
In the first example, you use your MAX function to turn a single article_code column into two different columns (has9 and has8). In your second example, you are no longer splitting up your article_code column into multiple columns, therefore, as far as I can tell, you no longer need your MAX function.
\nHave you tried something along the following lines?
\nSELECT CASE WHEN SUBSTRING(article_code,5,1) IN ('9') THEN 'has9'\n WHEN SUBSTRING(article_code,5,1) IN ('8') THEN 'has8'\n ELSE 'FIX'\n END as test_version\nFROM xxxx\n
\nEDIT: Ah, in that case you will still need the MAX function to reduce it to a single line.
\nYou should be able to use your original query as a subquery that gets a single line and then use a CASE WHEN to convert it to a single string:
\nSELECT CASE WHEN has9 = 1 THEN 'has9'\n WHEN has8 = 1 THEN 'has8'\n ELSE 'FIX'\n END as test_version\nFROM ( SELECT MAX(CASE WHEN SUBSTRING(article_code,5,1) IN ('9') THEN 1 ELSE 0 END) AS has9,\n MAX(CASE WHEN SUBSTRING(article_code,5,1) IN ('8') THEN 1 ELSE 0 END) AS has8\n FROM xxxx ) t\n
\nOr, you could use my earlier query as subquery and use the MAX function to reduce it to a single line:
\nSELECT CASE WHEN MAX(result_rank) = 3 THEN 'has9'\n WHEN MAX(result_rank) = 2 THEN 'has8'\n ELSE 'FIX'\n END as test_version\nFROM ( SELECT CASE WHEN SUBSTRING(article_code,5,1) IN ('9') THEN 3\n WHEN SUBSTRING(article_code,5,1) IN ('8') THEN 2\n ELSE 1\n END as result_rank\n FROM xxxx ) t\n
\n
soup wrap:
In the first example, you use your MAX function to turn a single article_code column into two different columns (has9 and has8). In your second example, you are no longer splitting up your article_code column into multiple columns, therefore, as far as I can tell, you no longer need your MAX function.
Have you tried something along the following lines?
SELECT CASE WHEN SUBSTRING(article_code,5,1) IN ('9') THEN 'has9'
WHEN SUBSTRING(article_code,5,1) IN ('8') THEN 'has8'
ELSE 'FIX'
END as test_version
FROM xxxx
EDIT: Ah, in that case you will still need the MAX function to reduce it to a single line.
You should be able to use your original query as a subquery that gets a single line and then use a CASE WHEN to convert it to a single string:
SELECT CASE WHEN has9 = 1 THEN 'has9'
WHEN has8 = 1 THEN 'has8'
ELSE 'FIX'
END as test_version
FROM ( SELECT MAX(CASE WHEN SUBSTRING(article_code,5,1) IN ('9') THEN 1 ELSE 0 END) AS has9,
MAX(CASE WHEN SUBSTRING(article_code,5,1) IN ('8') THEN 1 ELSE 0 END) AS has8
FROM xxxx ) t
Or, you could use my earlier query as subquery and use the MAX function to reduce it to a single line:
SELECT CASE WHEN MAX(result_rank) = 3 THEN 'has9'
WHEN MAX(result_rank) = 2 THEN 'has8'
ELSE 'FIX'
END as test_version
FROM ( SELECT CASE WHEN SUBSTRING(article_code,5,1) IN ('9') THEN 3
WHEN SUBSTRING(article_code,5,1) IN ('8') THEN 2
ELSE 1
END as result_rank
FROM xxxx ) t
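A quick way to check the MAX-of-CASE collapse is to run it against a toy table. This sketch uses Python's sqlite3 with SUBSTR (SQLite's spelling of SUBSTRING) and invented article codes where the 5th character decides the flag:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (article_code TEXT);
INSERT INTO products VALUES ('abcd8x'), ('abcd7y'), ('abcd8z');
""")

# Rank every row (9 -> 3, 8 -> 2, else 1), then MAX collapses the
# whole table to a single verdict, exactly as in the last query above.
row = conn.execute("""
    SELECT CASE WHEN MAX(result_rank) = 3 THEN 'has9'
                WHEN MAX(result_rank) = 2 THEN 'has8'
                ELSE 'FIX' END AS test_version
    FROM ( SELECT CASE WHEN SUBSTR(article_code, 5, 1) = '9' THEN 3
                       WHEN SUBSTR(article_code, 5, 1) = '8' THEN 2
                       ELSE 1 END AS result_rank
           FROM products ) t
""").fetchone()
test_version = row[0]
```

With no '9' in the 5th position but two '8's, the single output row is 'has8'.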
qid & accept id:
(23821632, 23822213)
query:
How to segment a sequence of event by signal in SQL?
soup:
This is written in SQL Server syntax (for the table variable for the sample data) but it's fairly standard SQL and by looking at the query reference, I think it should run in BigQuery (once adapted to your actual table):
\ndeclare @t table ([order] int, event char(1))\ninsert into @t([order],event) values\n(1,'C'), (2,'C'), (3,'C'), (4,'S'), (5,'C'),\n(6,'S'), (7,'C'), (8,'C'), (9,'S')\n\nselect\n t.*,\n s1.rn\nfrom @t t\n inner join\n(\nselect\n *,\n ROW_NUMBER() OVER (ORDER BY [order]) as rn\nfrom\n @t\nwhere\n event='S'\n) s1\n on\n t.[order] <= s1.[order]\n left join\n(\nselect\n *,\n ROW_NUMBER() OVER (ORDER BY [order]) as rn\nfrom\n @t\nwhere\n event='S'\n) s2\n on\n t.[order] <= s2.[order] and\n s2.[order] < s1.[order]\nwhere\n s2.[order] is null\n
\nI would have normally used a Common Table Expression (CTE) rather than duplicating the subquery for the S values, but I couldn't see whether that was supported.
\nThe logic should be fairly straightforward to see - we number the S rows using a simple ROW_NUMBER() function, and then we match every row from the original table to the S row which most immediately succeeds it.
\n
\nCTE variant (but, like I said, I couldn't see support for CTEs in the documentation):
\ndeclare @t table ([order] int, event char(1))\ninsert into @t([order],event) values\n(1,'C'), (2,'C'), (3,'C'), (4,'S'), (5,'C'),\n(6,'S'), (7,'C'), (8,'C'), (9,'S')\n\n;With Numbered as (\n select\n *,\n ROW_NUMBER() OVER (ORDER BY [order]) as rn\nfrom\n @t\nwhere\n event='S'\n)\nselect\n t.*,\n s1.rn\nfrom @t t\n inner join\nNumbered s1\n on\n t.[order] <= s1.[order]\n left join\nNumbered s2\n on\n t.[order] <= s2.[order] and\n s2.[order] < s1.[order]\nwhere\n s2.[order] is null\n
\n
soup wrap:
This is written in SQL Server syntax (for the table variable for the sample data) but it's fairly standard SQL and by looking at the query reference, I think it should run in BigQuery (once adapted to your actual table):
declare @t table ([order] int, event char(1))
insert into @t([order],event) values
(1,'C'), (2,'C'), (3,'C'), (4,'S'), (5,'C'),
(6,'S'), (7,'C'), (8,'C'), (9,'S')
select
t.*,
s1.rn
from @t t
inner join
(
select
*,
ROW_NUMBER() OVER (ORDER BY [order]) as rn
from
@t
where
event='S'
) s1
on
t.[order] <= s1.[order]
left join
(
select
*,
ROW_NUMBER() OVER (ORDER BY [order]) as rn
from
@t
where
event='S'
) s2
on
t.[order] <= s2.[order] and
s2.[order] < s1.[order]
where
s2.[order] is null
I would have normally used a Common Table Expression (CTE) rather than duplicating the subquery for the S values, but I couldn't see whether that was supported.
The logic should be fairly straightforward to see - we number the S rows using a simple ROW_NUMBER() function, and then we match every row from the original table to the S row which most immediately succeeds it.
CTE variant (but, like I said, I couldn't see support for CTEs in the documentation):
declare @t table ([order] int, event char(1))
insert into @t([order],event) values
(1,'C'), (2,'C'), (3,'C'), (4,'S'), (5,'C'),
(6,'S'), (7,'C'), (8,'C'), (9,'S')
;With Numbered as (
select
*,
ROW_NUMBER() OVER (ORDER BY [order]) as rn
from
@t
where
event='S'
)
select
t.*,
s1.rn
from @t t
inner join
Numbered s1
on
t.[order] <= s1.[order]
left join
Numbered s2
on
t.[order] <= s2.[order] and
s2.[order] < s1.[order]
where
s2.[order] is null
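The numbering logic is simple enough to express procedurally as a cross-check. Here is a plain Python sketch that walks the sample events once and bumps the segment counter after each 'S', matching the rn values the join produces:

```python
# Sample data from the answer: each row gets the index of the
# 'S' marker that closes its window.
events = [(1, 'C'), (2, 'C'), (3, 'C'), (4, 'S'), (5, 'C'),
          (6, 'S'), (7, 'C'), (8, 'C'), (9, 'S')]

segments = []
segment_no = 1
for order, event in events:
    segments.append((order, event, segment_no))
    if event == 'S':          # an 'S' ends the current window
        segment_no += 1

segment_ids = [seg for _, _, seg in segments]
```

Rows 1 through 4 land in segment 1, rows 5 and 6 in segment 2, and rows 7 through 9 in segment 3, the same grouping the SQL assigns via rn.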
qid & accept id:
(23828906, 23829371)
query:
Getting Month and Day from a date
soup:
SELECT CONVERT(CHAR(5), GETDATE(), 10)\n
\nResult:
\n05-23\n
\n
soup wrap:
SELECT CONVERT(CHAR(5), GETDATE(), 10)
Result:
05-23
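For comparison, the same month-day string in Python is a one-liner. Style 10 formats as mm-dd-yy, so keeping the first five characters leaves mm-dd:

```python
from datetime import date

def month_day(d):
    # Equivalent of CONVERT(CHAR(5), d, 10): style 10 is mm-dd-yy,
    # and CHAR(5) keeps just the leading mm-dd.
    return d.strftime("%m-%d")

sample = month_day(date(2014, 5, 23))
```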
qid & accept id:
(23892604, 23892744)
query:
Compare two MySQL tables and remove rows that no longer exist
soup:
If you are using SQL to merge, a simple SQL can do the delete as well:
\ndelete from database_production.table\nwhere pk not in (select pk from database_temporary.table)\n
\nNotes:
\n\n- This assumes that each row can be uniquely identified. This may be based on a single column, multiple columns or another mechanism.
\n- If your dataset is large, a
not exists may perform better than not in. See What's the difference between NOT EXISTS vs. NOT IN vs. LEFT JOIN WHERE IS NULL? and NOT IN vs. NOT EXISTS vs. LEFT JOIN / IS NULL: SQL Server \n
\nAn example not exists:
\ndelete from database_production.table p\nwhere not exists (select 1 from database_temporary.table t where t.pk = p.pk)\n
\nPerformance Notes:
\nAs pointed out by @mgonzalez in the comments on the question, you may want to use a timestamp column (something like last modified) for comparing/merging in general so that you compare only changed rows. This does not apply to the delete specifically: you cannot use the timestamp for the delete because, well, the row would not exist.
\n
soup wrap:
If you are using SQL to merge, a simple SQL can do the delete as well:
delete from database_production.table
where pk not in (select pk from database_temporary.table)
Notes:
- This assumes that each row can be uniquely identified. This may be based on a single column, multiple columns or another mechanism.
- If your dataset is large, a not exists may perform better than not in. See What's the difference between NOT EXISTS vs. NOT IN vs. LEFT JOIN WHERE IS NULL? and NOT IN vs. NOT EXISTS vs. LEFT JOIN / IS NULL: SQL Server
An example not exists:
delete from database_production.table p
where not exists (select 1 from database_temporary.table t where t.pk = p.pk)
Performance Notes:
As pointed out by @mgonzalez in the comments on the question, you may want to use a timestamp column (something like last modified) for comparing/merging in general so that you compare only changed rows. This does not apply to the delete specifically: you cannot use the timestamp for the delete because, well, the row would not exist.
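Here is a minimal reproduction of the not exists delete, using Python's sqlite3 and invented table names (staging standing in for the temporary copy):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE production (pk INTEGER PRIMARY KEY);
CREATE TABLE staging    (pk INTEGER PRIMARY KEY);
INSERT INTO production VALUES (1), (2), (3);
INSERT INTO staging    VALUES (1), (3);
""")

# Delete production rows that no longer exist in the staging copy.
conn.execute("""
    DELETE FROM production
    WHERE NOT EXISTS (SELECT 1 FROM staging t WHERE t.pk = production.pk)
""")
remaining = [r[0] for r in conn.execute("SELECT pk FROM production ORDER BY pk")]
```

Row 2 is gone after the delete; rows 1 and 3 survive because they still exist in staging.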
qid & accept id:
(23907556, 23907600)
query:
Copying data want to keep to a new table and then rename
soup:
The query to copy everything to the new table goes like this:
\nSELECT * INTO dbo.NewTable FROM dbo.OldTable WHERE [event id] <> 6030\n
\nThen:
\nEXEC sp_rename 'dbo.OldTable', 'OldTable_History';\n
\nAnd:
\nEXEC sp_rename 'dbo.NewTable', 'OldTable';\n
\nIf you want to create the table manually do it then and after that run this:
\nINSERT INTO dbo.NewTable\nSELECT * FROM dbo.OldTable WHERE [event id] <> 6030\n
\n
soup wrap:
The query to copy everything to the new table goes like this:
SELECT * INTO dbo.NewTable FROM dbo.OldTable WHERE [event id] <> 6030
Then:
EXEC sp_rename 'dbo.OldTable', 'OldTable_History';
And:
EXEC sp_rename 'dbo.NewTable', 'OldTable';
If you want to create the table manually do it then and after that run this:
INSERT INTO dbo.NewTable
SELECT * FROM dbo.OldTable WHERE [event id] <> 6030
qid & accept id:
(23908145, 23908285)
query:
SQL Server - Change Date Format
soup:
Try like this
\nSELECT LEFT(DATENAME(dw, GETDATE()), 3) + ' , ' + CAST(Day(GetDate()) AS Varchar(10))\n
\n\nQuery would be like this
\nSELECT mydate,LEFT(DATENAME(dw, mydate), 3) + ' , ' + CAST(Day(mydate) AS Varchar(10)) As Date \nFrom tbl\n
\n\nO/P
\nMYDATE DATE\n2014-04-21 Mon ,21\n2014-04-22 Tue ,22\n2014-04-23 Wed ,23\n2014-04-24 Thu ,24\n
\n
soup wrap:
Try like this
SELECT LEFT(DATENAME(dw, GETDATE()), 3) + ' , ' + CAST(Day(GetDate()) AS Varchar(10))
Query would be like this
SELECT mydate,LEFT(DATENAME(dw, mydate), 3) + ' , ' + CAST(Day(mydate) AS Varchar(10)) As Date
From tbl
O/P
MYDATE DATE
2014-04-21 Mon ,21
2014-04-22 Tue ,22
2014-04-23 Wed ,23
2014-04-24 Thu ,24
qid & accept id:
(23924244, 23924333)
query:
How to identify duplicate rows having value within data range in oracle
soup:
You can use EXISTS for this:
\nselect * \nfrom yourtable y\nwhere exists (\n select 1\n from yourtable y2\n where y.id <> y2.id \n and y.name = y2.name\n and (y2.startfield between y.startfield and y.endfield\n or\n y.startfield between y2.startfield and y2.endfield))\n
\n\n- SQL Fiddle Demo
\n
\nI wasn't completely sure from your question if the end range had to be included as well. If so, you'll need to add that to the where criteria:
\nselect * \nfrom yourtable y\nwhere exists (\n select 1\n from yourtable y2\n where y.id <> y2.id \n and y.name = y2.name\n and ((y2.startfield > y.startfield and y2.endfield < y.endfield)\n or\n (y.startfield > y2.startfield and y.endfield < y2.endfield)))\n
\n
soup wrap:
You can use EXISTS for this:
select *
from yourtable y
where exists (
select 1
from yourtable y2
where y.id <> y2.id
and y.name = y2.name
and (y2.startfield between y.startfield and y.endfield
or
y.startfield between y2.startfield and y2.endfield))
I wasn't completely sure from your question if the end range had to be included as well. If so, you'll need to add that to the where criteria:
select *
from yourtable y
where exists (
select 1
from yourtable y2
where y.id <> y2.id
and y.name = y2.name
and ((y2.startfield > y.startfield and y2.endfield < y.endfield)
or
(y.startfield > y2.startfield and y.endfield < y2.endfield)))
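The symmetric BETWEEN overlap test can be tried on toy data. This sketch uses Python's sqlite3 with ISO-8601 date strings (which compare correctly as text), and invented rows where only the first two ranges overlap:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ranges (id INT, name TEXT, startfield TEXT, endfield TEXT);
INSERT INTO ranges VALUES
 (1, 'a', '2014-01-01', '2014-01-10'),
 (2, 'a', '2014-01-05', '2014-01-15'),
 (3, 'a', '2014-02-01', '2014-02-05');
""")

# A row is a duplicate if some other row with the same name starts
# inside its range, or it starts inside the other row's range.
dupes = [r[0] for r in conn.execute("""
    SELECT y.id
    FROM ranges y
    WHERE EXISTS (
        SELECT 1 FROM ranges y2
        WHERE y.id <> y2.id
          AND y.name = y2.name
          AND (y2.startfield BETWEEN y.startfield AND y.endfield
               OR y.startfield BETWEEN y2.startfield AND y2.endfield)
    )
    ORDER BY y.id
""")]
```

Rows 1 and 2 flag each other; row 3 sits outside both ranges and is left alone.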
qid & accept id:
(23948815, 23948894)
query:
SQL: How to find product codes
soup:
Use substring() to extract the produce code, and group-by with having to find the hits:
\nselect substring(product_id, 5, len(product_id)) code\nfrom products\ngroup by substring(product_id, 5, len(product_id))\nhaving count(*) > 1\n
\nIf you want a specific one, add a where clause:
\nselect substring(product_id, 5, len(product_id)) code\nfrom products\nwhere substring(product_id, 5, len(product_id)) = '0700400B'\ngroup by substring(product_id, 5, len(product_id))\nhaving count(*) > 1\n
\n
soup wrap:
Use substring() to extract the product code, and group-by with having to find the hits:
select substring(product_id, 5, len(product_id)) code
from products
group by substring(product_id, 5, len(product_id))
having count(*) > 1
If you want a specific one, add a where clause:
select substring(product_id, 5, len(product_id)) code
from products
where substring(product_id, 5, len(product_id)) = '0700400B'
group by substring(product_id, 5, len(product_id))
having count(*) > 1
qid & accept id:
(23950035, 23950156)
query:
How would you select records from a table based on the difference between 'created' dates with MySQL?
soup:
slower option
\nSELECT id, TIME_TO_SEC(TIMEDIFF(MAX(created_at),MIN(created_at))) as seconds_difference\nFROM table\nGROUP BY id\nHAVING seconds_difference > 3600*24\n
\nfaster option
\nSELECT t1.id, TIME_TO_SEC(TIMEDIFF(t2.created_at, t1.created_at)) as seconds_difference\nFROM table t1\nINNER JOIN table t2 ON (t2.id = t1.id AND t2.created_at > t1.created_at)\nWHERE TIME_TO_SEC(TIMEDIFF(t2.created_at, t1.created_at)) > 3600*24\n
\n
soup wrap:
slower option
SELECT id, TIME_TO_SEC(TIMEDIFF(MAX(created_at),MIN(created_at))) as seconds_difference
FROM table
GROUP BY id
HAVING seconds_difference > 3600*24
faster option
SELECT t1.id, TIME_TO_SEC(TIMEDIFF(t2.created_at, t1.created_at)) as seconds_difference
FROM table t1
INNER JOIN table t2 ON (t2.id = t1.id AND t2.created_at > t1.created_at)
WHERE TIME_TO_SEC(TIMEDIFF(t2.created_at, t1.created_at)) > 3600*24
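The threshold is just a seconds comparison. The same check in Python with datetime, on made-up timestamps:

```python
from datetime import datetime

def exceeds_24h(created_min, created_max):
    # Same test as TIME_TO_SEC(TIMEDIFF(max, min)) > 3600*24
    return (created_max - created_min).total_seconds() > 3600 * 24

a = datetime(2014, 5, 1, 12, 0)
b = datetime(2014, 5, 3, 12, 0)
c = datetime(2014, 5, 1, 18, 0)
wide = exceeds_24h(a, b)    # 48 hours apart
narrow = exceeds_24h(a, c)  # 6 hours apart
```

One caveat with the MySQL versions: TIMEDIFF() saturates at 838:59:59, so TIME_TO_SEC(TIMEDIFF(...)) understates very wide date ranges.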
qid & accept id:
(23954139, 23955928)
query:
SSRS report to show missing/ NULL entries Mon to Fri.
soup:
SQL is still the best way to get all the data you need. What I would recommend is creating a temp table with the limited values list you want, for instance Monday, Tuesday, etc. Then you can use the apply operator against your data table and get the not matching day values.
\nSELECT * FROM Days D \nOUTER APPLY \n ( \n SELECT * FROM Orders E \n WHERE DATENAME(weekday, E.OrderDate) = D.DayName\n ) A \n
\nWould return something like:
\nDayName OrderCount Amount\nMonday 2 50.00\nTuesday NULL NULL\nWednesday 5 125.00\nThursday NULL NULL\nFriday 7 225.00\n
\nBelow you can find an article on the apply operators that you can use:
\n\n
soup wrap:
SQL is still the best way to get all the data you need. What I would recommend is creating a temp table with the limited values list you want, for instance Monday, Tuesday, etc. Then you can use the apply operator against your data table and get the not matching day values.
SELECT * FROM Days D
OUTER APPLY
(
SELECT * FROM Orders E
WHERE DATENAME(weekday, E.OrderDate) = D.DayName
) A
Would return something like:
DayName OrderCount Amount
Monday 2 50.00
Tuesday NULL NULL
Wednesday 5 125.00
Thursday NULL NULL
Friday 7 225.00
Below you can find an article on the apply operators that you can use:
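If you want to try the shape of this without OUTER APPLY, a LEFT JOIN plus GROUP BY produces the same NULL-padded rows for this kind of per-day aggregate. A sketch with Python's sqlite3 and invented day/order data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE days (dayname TEXT);
INSERT INTO days VALUES ('Monday'), ('Tuesday'), ('Wednesday');
CREATE TABLE orders (dayname TEXT, amount REAL);
INSERT INTO orders VALUES ('Monday', 25.0), ('Monday', 25.0), ('Wednesday', 125.0);
""")

# LEFT JOIN keeps every day; days with no orders come back NULL-padded.
rows = conn.execute("""
    SELECT d.dayname, COUNT(o.amount), SUM(o.amount)
    FROM days d
    LEFT JOIN orders o ON o.dayname = d.dayname
    GROUP BY d.dayname
    ORDER BY d.dayname
""").fetchall()
```

Note that COUNT() comes back as 0 rather than NULL for the empty day, while the SUM stays NULL, matching the gaps in the sample output above.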
qid & accept id:
(23959544, 23962356)
query:
Sliding, variable "window" with highest density of rows
soup:
Lets start with a table definition and some INSERT statements. This reflects your data before you changed the question.
\ncreate table log_test (\n datetime date not null,\n action varchar(15) not null,\n username varchar(15) not null,\n primary key (datetime, action, username)\n);\n\ninsert into log_test values\n('2013-09-30', 'update', 'username'),\n('2013-12-15', 'update', 'username'),\n('2014-03-01', 'update', 'username'),\n('2014-03-02', 'update', 'username'),\n('2014-03-03', 'update', 'username'),\n('2014-03-05', 'update', 'username'),\n('2015-05-20', 'update', 'username');\n
\nNow we build a table of integers. This kind of table is useful in many ways; mine has several million rows in it. (There are ways to automate the insert statements.)
\ncreate table integers (\n n integer not null,\n primary key (n)\n);\ninsert into integers values \n (0), (1), (2), (3), (4), (5), (6), (7), (8), (9),\n(10), (11), (12), (13), (14), (15), (16), (17), (18), (19),\n(20), (21), (22), (23), (24), (25), (26), (27), (28), (29),\n(30), (31), (32), (33), (34), (35), (36), (37), (38), (39),\n(40), (41), (42), (43), (44), (45), (46), (47), (48), (49);\n
\nThis statement gives us the dates from log_test, along with the number of days in the "window" we want to look at. You need to select distinct, because there might be multiple users with the same dates.
\nselect distinct datetime, t.n\nfrom log_test\ncross join (select n from integers where n between 10 and 40) t\norder by datetime, t.n;\n
\n\ndatetime n\n--\n2013-09-30 10\n2013-09-30 11\n2013-09-30 12\n...\n2015-05-20 39\n2015-05-20 40\n
\nWe can use that result as a derived table, and do date arithmetic on it.
\nselect datetime period_start, datetime + interval t2.n day period_end\nfrom (\n select distinct datetime, t.n\n from log_test\n cross join (select n from integers where n between 10 and 40) t ) t2\norder by period_start, period_end;\n
\n\nperiod_start period_end\n--\n2013-09-30 2013-10-10\n2013-09-30 2013-10-11\n2013-09-30 2013-10-12\n...\n2015-05-20 2015-06-28\n2015-05-20 2015-06-29\n
\nThese intervals are off by one; 2013-09-30 to 2013-10-10 has 11 days. I'll leave that repair up to you.
\nThe next version counts the number of "happenings" in each period. In your case, as the question was originally written, we just need to count the number of rows in each period.
\nselect username, t3.period_start, t3.period_end, count(datetime) num_rows\nfrom log_test\ninner join (\n select datetime period_start, datetime + interval t2.n day period_end\n from (\n select distinct datetime, t.n\n from log_test\n cross join (select n from integers where n between 10 and 40) t ) t2\n order by period_start, period_end ) t3\non log_test.datetime between t3.period_start and t3.period_end\ngroup by username, t3.period_start, t3.period_end\norder by username, t3.period_start, t3.period_end;\n
\n\nusername period_start period_end num_rows\n--\nusername 2013-09-30 2013-10-10 1\nusername 2013-09-30 2013-10-11 1\nusername 2013-09-30 2013-10-12 1\n...\nusername 2014-03-01 2014-03-11 4\nusername 2014-03-01 2014-03-12 4\n...\nusername 2015-05-20 2015-06-28 1\nusername 2015-05-20 2015-06-29 1\n
\nFinally, we can work some arithmetic magic, and get the density of each "window".
\nselect username, \n t3.period_start, t3.period_end, t3.n, \n count(datetime) num_rows,\n count(datetime)/t3.n density\nfrom log_test\ninner join (\n select datetime period_start, t2.n, datetime + interval t2.n day period_end\n from (\n select distinct datetime, t.n\n from log_test\n cross join (select n from integers where n between 10 and 40) t ) t2\n order by period_start, period_end ) t3\non log_test.datetime between t3.period_start and t3.period_end\ngroup by username, t3.period_start, t3.period_end, t3.n\norder by username, density desc;\n
\n\nusername period_start period_end n num_rows density\n--\nusername 2014-03-01 2014-03-11 10 4 0.4000\nusername 2014-03-01 2014-03-12 11 4 0.3636\nusername 2014-03-01 2014-03-13 12 4 0.3333\n...\n
\nSuggestions for refinement
\nYou might want to change the date arithmetic. As it stands, these queries simply add 'n' days to the dates in the test table. But that means the periods won't be symmetric around gaps. For example, the date 2014-03-01 appears after a long gap. As it stands now, we don't try to evaluate the density of a "window" that ends on 2014-03-01 (a "window" that comes at the first value in a gap from before it). This might be worth thinking through for your application.
\n
soup wrap:
Let's start with a table definition and some INSERT statements. This reflects your data before you changed the question.
create table log_test (
datetime date not null,
action varchar(15) not null,
username varchar(15) not null,
primary key (datetime, action, username)
);
insert into log_test values
('2013-09-30', 'update', 'username'),
('2013-12-15', 'update', 'username'),
('2014-03-01', 'update', 'username'),
('2014-03-02', 'update', 'username'),
('2014-03-03', 'update', 'username'),
('2014-03-05', 'update', 'username'),
('2015-05-20', 'update', 'username');
Now we build a table of integers. This kind of table is useful in many ways; mine has several million rows in it. (There are ways to automate the insert statements.)
create table integers (
n integer not null,
primary key (n)
);
insert into integers values
(0), (1), (2), (3), (4), (5), (6), (7), (8), (9),
(10), (11), (12), (13), (14), (15), (16), (17), (18), (19),
(20), (21), (22), (23), (24), (25), (26), (27), (28), (29),
(30), (31), (32), (33), (34), (35), (36), (37), (38), (39),
(40), (41), (42), (43), (44), (45), (46), (47), (48), (49);
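The answer mentions there are ways to automate those INSERT statements; one sketch, using Python's sqlite3 as the engine:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE integers (n INTEGER NOT NULL PRIMARY KEY)")
# Generate the 50 rows instead of typing the VALUES list by hand.
con.executemany("INSERT INTO integers VALUES (?)", ((i,) for i in range(50)))
count, lo, hi = con.execute(
    "SELECT COUNT(*), MIN(n), MAX(n) FROM integers").fetchone()
print(count, lo, hi)
```

On engines with recursive CTEs, a `WITH RECURSIVE` generator serves the same purpose without a helper script.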
This statement gives us the dates from log_test, along with the number of days in the "window" we want to look at. You need to select distinct, because there might be multiple users with the same dates.
select distinct datetime, t.n
from log_test
cross join (select n from integers where n between 10 and 40) t
order by datetime, t.n;
datetime n
--
2013-09-30 10
2013-09-30 11
2013-09-30 12
...
2015-05-20 39
2015-05-20 40
We can use that result as a derived table, and do date arithmetic on it.
select datetime period_start, datetime + interval t2.n day period_end
from (
select distinct datetime, t.n
from log_test
cross join (select n from integers where n between 10 and 40) t ) t2
order by period_start, period_end;
period_start period_end
--
2013-09-30 2013-10-10
2013-09-30 2013-10-11
2013-09-30 2013-10-12
...
2015-05-20 2015-06-28
2015-05-20 2015-06-29
These intervals are off by one; 2013-09-30 to 2013-10-10 has 11 days. I'll leave that repair up to you.
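A quick check of that off-by-one: BETWEEN is inclusive at both ends, so an n-day window should add n-1 days to its start.

```python
from datetime import date, timedelta

start, end = date(2013, 9, 30), date(2013, 10, 10)
inclusive_days = (end - start).days + 1   # BETWEEN counts both endpoints
print(inclusive_days)

# The repair: for a 10-day window, add n-1 = 9 days to the start date.
fixed_end = start + timedelta(days=10 - 1)
```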
The next version counts the number of "happenings" in each period. In your case, as the question was originally written, we just need to count the number of rows in each period.
select username, t3.period_start, t3.period_end, count(datetime) num_rows
from log_test
inner join (
select datetime period_start, datetime + interval t2.n day period_end
from (
select distinct datetime, t.n
from log_test
cross join (select n from integers where n between 10 and 40) t ) t2
order by period_start, period_end ) t3
on log_test.datetime between t3.period_start and t3.period_end
group by username, t3.period_start, t3.period_end
order by username, t3.period_start, t3.period_end;
username period_start period_end num_rows
--
username 2013-09-30 2013-10-10 1
username 2013-09-30 2013-10-11 1
username 2013-09-30 2013-10-12 1
...
username 2014-03-01 2014-03-11 4
username 2014-03-01 2014-03-12 4
...
username 2015-05-20 2015-06-28 1
username 2015-05-20 2015-06-29 1
Finally, we can work some arithmetic magic, and get the density of each "window".
select username,
t3.period_start, t3.period_end, t3.n,
count(datetime) num_rows,
count(datetime)/t3.n density
from log_test
inner join (
select datetime period_start, t2.n, datetime + interval t2.n day period_end
from (
select distinct datetime, t.n
from log_test
cross join (select n from integers where n between 10 and 40) t ) t2
order by period_start, period_end ) t3
on log_test.datetime between t3.period_start and t3.period_end
group by username, t3.period_start, t3.period_end, t3.n
order by username, density desc;
username period_start period_end n num_rows density
--
username 2014-03-01 2014-03-11 10 4 0.4000
username 2014-03-01 2014-03-12 11 4 0.3636
username 2014-03-01 2014-03-13 12 4 0.3333
...
Suggestions for refinement
You might want to change the date arithmetic. As it stands, these queries simply add 'n' days to the dates in the test table. But that means the periods won't be symmetric around gaps. For example, the date 2014-03-01 appears after a long gap. As it stands now, we don't try to evaluate the density of a "window" that ends on 2014-03-01 (a "window" that comes at the first value in a gap from before it). This might be worth thinking through for your application.
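The whole pipeline can be sketched end to end. This is a re-implementation in Python's sqlite3, not MySQL: the integers table becomes a recursive CTE, and n-1 days are added so each window spans exactly n days (repairing the off-by-one the answer flags).

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE log_test (datetime TEXT NOT NULL, action TEXT NOT NULL,
                       username TEXT NOT NULL,
                       PRIMARY KEY (datetime, action, username));
INSERT INTO log_test VALUES
('2013-09-30','update','username'), ('2013-12-15','update','username'),
('2014-03-01','update','username'), ('2014-03-02','update','username'),
('2014-03-03','update','username'), ('2014-03-05','update','username'),
('2015-05-20','update','username');
""")

best = con.execute("""
WITH RECURSIVE integers(n) AS (
  SELECT 10 UNION ALL SELECT n + 1 FROM integers WHERE n < 40
),
windows AS (
  SELECT DISTINCT l.datetime AS period_start,
         date(l.datetime, '+' || (i.n - 1) || ' days') AS period_end,
         i.n
  FROM log_test l CROSS JOIN integers i
)
SELECT l.username, w.period_start, w.period_end,
       COUNT(*) AS num_rows,
       1.0 * COUNT(*) / w.n AS density
FROM log_test l
JOIN windows w ON l.datetime BETWEEN w.period_start AND w.period_end
GROUP BY l.username, w.period_start, w.period_end, w.n
ORDER BY density DESC
LIMIT 1
""").fetchone()
print(best)
```

The densest window starts at 2014-03-01, the cluster of four nearby rows, matching the top row of the answer's final result.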
qid & accept id:
(23963860, 23965305)
query:
Making ID attributes unique in XML
soup:
In your environment you can use XSLT 1.0 to transform the document and generate IDs during the process. See: DBMS_XSLPROCESSOR.
\nWith a XSLT stylesheet you can copy the nodes from your XML source to a result tree, creating unique IDs in the process. The IDs will not be sequential numbers, but unique string sequences generated by the generate-id() method. You can't control what they look like, but you can guarantee they are unique. (XSLT also allows you to get rid of duplicate nodes (using a key) if that's your intention, but from your example I understood that duplicate *ID*s doesn't actually mean the node is a duplicate, since you want to generate a new ID for it.)
\nThe stylesheet below has two templates. The second one is an identity transform: it simply copies elements and attributes to the result tree. The first template creates an attribute named id containing an unique ID.
\n\n \n \n\n \n \n \n \n \n \n \n \n\n \n \n \n \n \n\n \n
\nThe other templates (in this case only the identity template) are called for all nodes and attributes, except the id attribute by . The result is a copy of your original XML file with generated unique IDs for the book elements.
\nIf you had a XML such as this one:
\n\n \n \n \n \n \n Text \n \n \n \n \n \n \n \n
\nthe XSLT above would transform it into this XML:
\n\n \n \n \n \n \n Text \n \n \n \n \n \n \n \n
\n(the string sequences are arbitrary, and might be different in your implementation).
\nFor creating ID/IDREF links the generated string sequences are better than numbers since you can use them anywhere (numbers and identifiers that start with numbers can't always be used as IDs). But if string sequences are not acceptable and you need sequential numbers, you can use XPath node position() in XQuery or XSLT to generate a number based on the element's position in the whole document (which will be unique). If all books are siblings in the same context, you can simply replace the generate-id(.) in the stylesheet above for position():
\n\n \n \n \n \n \n \n \n
\n(if the books are not siblings, you will need to do it in a slightly different way, using a variable).
\nIf you want to retain the existing IDs and only generate sequential ones for the duplicates, it will be a bit more complicated but you can achieve that with keys (or XQuery instead of XSLT). The maximum id can be obtained in XPath 2.0 using the max() function:
\nmax(//book/@id)\n
\nThat function does not exist in XPath 1.0, but you can obtain the maximum ID by using:
\n//book[not(@id < //book/@id)]/@id\n
\n
soup wrap:
In your environment you can use XSLT 1.0 to transform the document and generate IDs during the process. See: DBMS_XSLPROCESSOR.
With an XSLT stylesheet you can copy the nodes from your XML source to a result tree, creating unique IDs in the process. The IDs will not be sequential numbers, but unique string sequences generated by the generate-id() function. You can't control what they look like, but you can guarantee they are unique. (XSLT also allows you to get rid of duplicate nodes (using a key) if that's your intention, but from your example I understood that duplicate *ID*s don't actually mean the node is a duplicate, since you want to generate a new ID for it.)
The stylesheet below has two templates. The second one is an identity transform: it simply copies elements and attributes to the result tree. The first template creates an attribute named id containing a unique ID.
The other templates (in this case only the identity template) are called for all nodes and attributes except the id attribute. The result is a copy of your original XML file with generated unique IDs for the book elements.
If you had an XML document such as this one:
Text
the XSLT above would transform it into this XML:
Text
(the string sequences are arbitrary, and might be different in your implementation).
For creating ID/IDREF links the generated string sequences are better than numbers, since you can use them anywhere (numbers, and identifiers that start with numbers, can't always be used as IDs). But if string sequences are not acceptable and you need sequential numbers, you can use the XPath position() function in XQuery or XSLT to generate a number based on the element's position in the whole document (which will be unique). If all books are siblings in the same context, you can simply replace the generate-id(.) in the stylesheet above with position():
(if the books are not siblings, you will need to do it in a slightly different way, using a variable).
If you want to retain the existing IDs and only generate sequential ones for the duplicates, it will be a bit more complicated but you can achieve that with keys (or XQuery instead of XSLT). The maximum id can be obtained in XPath 2.0 using the max() function:
max(//book/@id)
That function does not exist in XPath 1.0, but you can obtain the maximum ID by using:
//book[not(@id < //book/@id)]/@id
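The "keep existing IDs, renumber only duplicates" idea from the last paragraph can also be sketched procedurally. This Python/ElementTree sketch is not XSLT, and the book XML is invented; it seeds fresh IDs from the max(//book/@id) value:

```python
import xml.etree.ElementTree as ET

xml = """<library>
  <book id="1"/><book id="2"/><book id="2"/><book id="3"/><book id="1"/>
</library>"""
root = ET.fromstring(xml)

books = root.findall(".//book")
# Equivalent of max(//book/@id): fresh ids start past the current maximum.
next_id = max(int(b.get("id")) for b in books) + 1
seen = set()
for b in books:
    if b.get("id") in seen:           # duplicate: assign the next fresh id
        b.set("id", str(next_id))
        next_id += 1
    seen.add(b.get("id"))
ids = [b.get("id") for b in books]
print(ids)
```

The first occurrence of each ID is retained; only the later duplicates are renumbered.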
qid & accept id:
(23992536, 23992643)
query:
Extract Date from VARCHAR string ORacle
soup:
By extract, do you mean something like:
\nDECLARE\n match VARCHAR2(255);\nBEGIN\n match := REGEXP_SUBSTR(subject, '\d{2}-\w{3}-\d{4}', 1, 1, 'im');\nEND;\n
\nExplain Regex
\n\d{2} # digits (0-9) (2 times)\n- # '-'\n\w{3} # word characters (a-z, A-Z, 0-9, _) (3\n # times)\n- # '-'\n\d{4} # digits (0-9) (4 times)\n
\n
soup wrap:
By extract, do you mean something like:
DECLARE
match VARCHAR2(255);
BEGIN
match := REGEXP_SUBSTR(subject, '\d{2}-\w{3}-\d{4}', 1, 1, 'im');
END;
Explain Regex
\d{2} # digits (0-9) (2 times)
- # '-'
\w{3} # word characters (a-z, A-Z, 0-9, _) (3
# times)
- # '-'
\d{4} # digits (0-9) (4 times)
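The same pattern can be tested quickly in Python's re module (the sample subject string is invented):

```python
import re

subject = "Invoice received on 17-JUN-2014 from vendor"
# Same dd-MON-yyyy pattern the REGEXP_SUBSTR call uses, case-insensitive
# like its 'i' match parameter.
m = re.search(r"\d{2}-\w{3}-\d{4}", subject, re.IGNORECASE)
print(m.group(0) if m else None)
```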
qid & accept id:
(23997222, 23997311)
query:
Select by a key for all associated records from a denormalizing database
soup:
This statement will probably prevent everything from working:
\nEXEC ('SELECT * FROM '+ @tablename +'where EmployeeID = 102')\n
\nYou need a space after the table name:
\nEXEC ('SELECT * FROM '+ @tablename +' where EmployeeID = 102')\n
\nIn addition, your cursor logic seems off. You should be checking for @@FETCH_STATUS and then closing and deallocating the cursor.
\nFollow the example at the end of the documentation.
\n
soup wrap:
This statement will probably prevent everything from working:
EXEC ('SELECT * FROM '+ @tablename +'where EmployeeID = 102')
You need a space after the table name:
EXEC ('SELECT * FROM '+ @tablename +' where EmployeeID = 102')
In addition, your cursor logic seems off. You should be checking for @@FETCH_STATUS and then closing and deallocating the cursor.
Follow the example at the end of the documentation.
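The effect of the missing space is easy to see by building the string by hand (the table name here is assumed):

```python
tablename = "Employees"
# Without the leading space, the table name and keyword run together:
broken = "SELECT * FROM " + tablename + "where EmployeeID = 102"
# With it, the statement is well formed:
fixed = "SELECT * FROM " + tablename + " where EmployeeID = 102"
print(broken)
print(fixed)
```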
qid & accept id:
(24012213, 24015752)
query:
COUNT on Sub Query and Join
soup:
In the first query you group by ids, in the second by names. So the first query gives you counts per customer and product, whereas the second query gives you counts per equally named customers and equally named products.
\nExample:
\nuser 1 = John, user 2 = John\nproduct a = toy, product b = toy\norders: 1 a, 1 a, 1 b, 2 a\n
\nquery 1:
\n2, John, toy\n1, John, toy\n1, John, toy\n
\nquery 2:
\n4, John, toy\n
\n
soup wrap:
In the first query you group by ids, in the second by names. So the first query gives you counts per customer and product, whereas the second query gives you counts per equally named customers and equally named products.
Example:
user 1 = John, user 2 = John
product a = toy, product b = toy
orders: 1 a, 1 a, 1 b, 2 a
query 1:
2, John, toy
1, John, toy
1, John, toy
query 2:
4, John, toy
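The John/toy example can be run verbatim. A sketch in Python's sqlite3 (column names are assumptions):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE products (id TEXT PRIMARY KEY, name TEXT);
CREATE TABLE orders (user_id INTEGER, product_id TEXT);
INSERT INTO users VALUES (1,'John'), (2,'John');
INSERT INTO products VALUES ('a','toy'), ('b','toy');
INSERT INTO orders VALUES (1,'a'), (1,'a'), (1,'b'), (2,'a');
""")

# Query 1: group by ids -> one row per (customer, product) pair
by_id = con.execute("""
SELECT COUNT(*), u.name, p.name FROM orders o
JOIN users u ON u.id = o.user_id
JOIN products p ON p.id = o.product_id
GROUP BY o.user_id, o.product_id ORDER BY 1 DESC""").fetchall()

# Query 2: group by names -> equally named rows collapse together
by_name = con.execute("""
SELECT COUNT(*), u.name, p.name FROM orders o
JOIN users u ON u.id = o.user_id
JOIN products p ON p.id = o.product_id
GROUP BY u.name, p.name""").fetchall()
print(by_id, by_name)
```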
qid & accept id:
(24035933, 24036260)
query:
Select a record just if the one before it has a lower value takes too long and fail
soup:
Here's a solution for your question 1 which will run much faster, since you have many full table scans and dependent subqueries. Here you will at most have just one table scan (and maybe a temporary table, depending how large your data is and how much memory you've got). I think you can easily adjust it to your question here. Question 2 (I haven't read it really) is probably also answered since it's easy now to just add where date_column = whatever
\nselect * from (\n select\n t.*,\n if(@prev_toner < Remain_Toner_Black and @prev_sn = SerialNumber, 1, 0) as select_it,\n @prev_sn := SerialNumber,\n @prev_toner := Remain_Toner_Black\n from\n Table1 t\n , (select @prev_toner:=0, @prev_sn:=SerialNumber from Table1 order by SerialNumber limit 1) var_init\n order by SerialNumber, id\n) sq \nwhere select_it = 1\n
\n\n- see it working live in an sqlfiddle
\n
\nEDIT:
\nExplanation:
\nWith this line
\n , (select @prev_toner:=0, @prev_sn:=SerialNumber from Table1 order by SerialNumber \n
\nwe just initialize the variables @prev_toner and @prev_sn on the fly. It's the same as not having this line in the query at all but writing in front of the query
\nSET @prev_toner = 0;\nSET @prev_sn = (select serialnumber from your_table order by serialnumber limit 1);\nSELECT ...\n
\nSo, why do the query to assign a value to @prev_sn and why order by serialnumber? The order by is very important. Without an order by there's no guaranteed order in which rows are returned. Also we will access the previous rows value with variables, so it's important that same serial numbers are "grouped together".
\nThe columns in the select clause are evaluated one after another, so it's important that you first select this line
\nif(@prev_toner < Remain_Toner_Black and @prev_sn = SerialNumber, 1, 0) as select_it,\n
\nbefore you select these two lines
\n@prev_sn := SerialNumber,\n@prev_toner := Remain_Toner_Black\n
\nWhy is that? The last two lines assign just the values of the current rows to the variables. Therefor in this line
\nif(@prev_toner < Remain_Toner_Black and @prev_sn = SerialNumber, 1, 0) as select_it,\n
\nthe variables still hold the values of the previous rows. And what we do here is nothing more than saying "if the previous rows value in column Remain_Toner_Black is smaller than the one in the current row and the previous rows serial number is the same as the actual rows serial number, return 1, else return 0."
\nThen we can simply say in the outer query "select every row, where the above returned 1".
\nGiven your query, you don't need all these subqueries. They are very expensive and unnecessary. Actually it's quite insane. In this part of the query
\n SELECT a.ID, \n a.Time, \n a.SerialNumber, \n a.Remain_Toner_Black,\n a.Remain_Toner_Cyan,\n a.Remain_Toner_Magenta,\n a.Remain_Toner_Yellow,\n (\n SELECT COUNT(*)\n FROM Reports c\n WHERE c.SerialNumber = a.SerialNumber AND\n c.ID <= a.ID) AS RowNumber\n FROM Reports a\n
\nyou select the whole table and for every row you count the rows within that group. That's a dependent subquery. All just to have some sort of row number. Then you do this a second time, just so you can join those two temporary tables to get the previous row. Really, no wonder the performance is horrible.
\nSo, how to adjust my solution to your query? Instead of the one variable I used to get the previous row for Remain_Toner_Black use four for the colours black, cyan, magenta and yellow. And just join the Printers and Customers table like you did already. Don't forget the order by and you're done.
\n
soup wrap:
Here's a solution for your question 1 which will run much faster, since you have many full table scans and dependent subqueries. Here you will have at most one table scan (and maybe a temporary table, depending on how large your data is and how much memory you've got). I think you can easily adjust it to your question here. Question 2 (I haven't really read it) is probably also answered, since it's now easy to just add where date_column = whatever.
select * from (
select
t.*,
if(@prev_toner < Remain_Toner_Black and @prev_sn = SerialNumber, 1, 0) as select_it,
@prev_sn := SerialNumber,
@prev_toner := Remain_Toner_Black
from
Table1 t
, (select @prev_toner:=0, @prev_sn:=SerialNumber from Table1 order by SerialNumber limit 1) var_init
order by SerialNumber, id
) sq
where select_it = 1
- see it working live in an sqlfiddle
EDIT:
Explanation:
With this line
, (select @prev_toner:=0, @prev_sn:=SerialNumber from Table1 order by SerialNumber
we just initialize the variables @prev_toner and @prev_sn on the fly. It's the same as not having this line in the query at all but writing in front of the query
SET @prev_toner = 0;
SET @prev_sn = (select serialnumber from your_table order by serialnumber limit 1);
SELECT ...
So, why do the query to assign a value to @prev_sn, and why order by serialnumber? The order by is very important. Without an order by there's no guaranteed order in which rows are returned. Also, we will access the previous row's value with variables, so it's important that same serial numbers are "grouped together".
The columns in the select clause are evaluated one after another, so it's important that you first select this line
if(@prev_toner < Remain_Toner_Black and @prev_sn = SerialNumber, 1, 0) as select_it,
before you select these two lines
@prev_sn := SerialNumber,
@prev_toner := Remain_Toner_Black
Why is that? The last two lines just assign the values of the current row to the variables. Therefore, in this line
if(@prev_toner < Remain_Toner_Black and @prev_sn = SerialNumber, 1, 0) as select_it,
the variables still hold the values of the previous row. And what we do here is nothing more than saying "if the previous row's value in column Remain_Toner_Black is smaller than the one in the current row, and the previous row's serial number is the same as the current row's serial number, return 1, else return 0".
Then we can simply say in the outer query "select every row, where the above returned 1".
Given your query, you don't need all these subqueries. They are very expensive and unnecessary. Actually it's quite insane. In this part of the query
SELECT a.ID,
a.Time,
a.SerialNumber,
a.Remain_Toner_Black,
a.Remain_Toner_Cyan,
a.Remain_Toner_Magenta,
a.Remain_Toner_Yellow,
(
SELECT COUNT(*)
FROM Reports c
WHERE c.SerialNumber = a.SerialNumber AND
c.ID <= a.ID) AS RowNumber
FROM Reports a
you select the whole table and for every row you count the rows within that group. That's a dependent subquery. All just to have some sort of row number. Then you do this a second time, just so you can join those two temporary tables to get the previous row. Really, no wonder the performance is horrible.
So, how to adjust my solution to your query? Instead of the one variable I used to get the previous row for Remain_Toner_Black use four for the colours black, cyan, magenta and yellow. And just join the Printers and Customers table like you did already. Don't forget the order by and you're done.
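On engines with window functions (MySQL 8+, SQLite 3.25+), the user-variable trick can be replaced by LAG(), which makes the "previous row" explicit. A sketch using Python's sqlite3 as a stand-in engine; the table name and sample data are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Table1 (id INTEGER PRIMARY KEY, SerialNumber TEXT,
                     Remain_Toner_Black INTEGER);
INSERT INTO Table1 VALUES
  (1,'A',90), (2,'A',80), (3,'A',95),  -- 95 > previous 80 for serial A
  (4,'B',70), (5,'B',60);
""")

# LAG() replaces the @prev_* variables: PARTITION BY keeps serial numbers
# separate (no cross-serial comparison), ORDER BY id replays row order.
rows = con.execute("""
SELECT id, SerialNumber, Remain_Toner_Black
FROM (
  SELECT t.*,
         LAG(Remain_Toner_Black) OVER
           (PARTITION BY SerialNumber ORDER BY id) AS prev
  FROM Table1 t
) sub
WHERE prev < Remain_Toner_Black
""").fetchall()
print(rows)
```

The first row of each partition has a NULL prev, so it is excluded automatically, just as the variable version excludes rows whose serial differs from the previous one.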
qid & accept id:
(24040834, 24042008)
query:
converting sysdate to datetime format
soup:
There is a little trick because of the T inside your format, so you have to cut it in two:
\nwith w as\n(\n select sysdate d from dual\n)\nselect to_char(w.d, 'yyyy-mm-dd') || 'T' || to_char(w.d, 'hh24:mi:ss')\nfrom w;\n
\nEDIT : A better way exists in a single call to to_char, as shown in this other SO post:
\nselect to_char(sysdate, 'yyyy-mm-dd"T"hh24:mi:ss') from dual;\n
\n
soup wrap:
There is a little trick because of the T inside your format, so you have to cut it in two:
with w as
(
select sysdate d from dual
)
select to_char(w.d, 'yyyy-mm-dd') || 'T' || to_char(w.d, 'hh24:mi:ss')
from w;
EDIT : A better way exists in a single call to to_char, as shown in this other SO post:
select to_char(sysdate, 'yyyy-mm-dd"T"hh24:mi:ss') from dual;
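Outside Oracle the same idea applies: literal text is embedded directly in the format string. A Python strftime sketch for comparison, where the T passes through because it is not a % directive:

```python
from datetime import datetime

now = datetime(2014, 6, 5, 14, 30, 9)  # fixed sample instead of sysdate
iso = now.strftime("%Y-%m-%dT%H:%M:%S")
print(iso)
```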
qid & accept id:
(24040926, 24041132)
query:
SQL Query Hotel Room from two tables (Type and Availability)
soup:
`SELECT * from Room R\nINNER JOIN Booking B on B.Room_ID = R.Room_ID\nwhere Room_Floor = 1\nAND From_date BETWEEN GETDATE() AND To_date\n`\n
\nThis will find all bookings for rooms on Floor 1
\n`SELECT * from Room R\nwhere not exists (select * from bookings where Room_ID = R.RoomID and GETDATE()\nBetween From_date AND To_date)\nand Room_Floor = 2`\n
\nThis will find all available rooms on floor 2
\nSomething like that I think
\n
soup wrap:
SELECT * from Room R
INNER JOIN Booking B on B.Room_ID = R.Room_ID
where Room_Floor = 1
AND From_date BETWEEN GETDATE() AND To_date
This will find all bookings for rooms on Floor 1
SELECT * from Room R
where not exists (select * from Booking where Room_ID = R.Room_ID and GETDATE()
Between From_date AND To_date)
and Room_Floor = 2
This will find all available rooms on floor 2
Something like that I think
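A runnable sketch of the availability query, using Python's sqlite3 instead of SQL Server; the schema and dates are invented, and a bound parameter stands in for GETDATE():

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Room (Room_ID INTEGER PRIMARY KEY, Room_Floor INTEGER);
CREATE TABLE Booking (Room_ID INTEGER, From_date TEXT, To_date TEXT);
INSERT INTO Room VALUES (1,2), (2,2), (3,2);
INSERT INTO Booking VALUES (1,'2014-06-01','2014-06-30');
""")

today = '2014-06-10'  # stand-in for GETDATE(); room 1 is booked then
available = con.execute("""
SELECT r.Room_ID FROM Room r
WHERE r.Room_Floor = 2
  AND NOT EXISTS (SELECT 1 FROM Booking b
                  WHERE b.Room_ID = r.Room_ID
                    AND ? BETWEEN b.From_date AND b.To_date)
ORDER BY r.Room_ID
""", (today,)).fetchall()
print(available)
```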
qid & accept id:
(24082669, 24083105)
query:
Finding Unknown XML Grandchildren Using SQL
soup:
To get all nodes not only from the first level use /form//* with // instead of /form/*
\nSELECT distinct Parent.Items.value('local-name(.)', 'varchar(100)') as 'Item'\n FROM dbo.FormResults \n CROSS APPLY xmlformfields.nodes('/form//*') as Parent(Items)\n
\n\nTo get also parent nodes use syntax ../. in local-name() call.\nTo get an Index of child inside a parent node and order by it you can use XQuery expression
\nfor $i in . return count(../*[. << $i])\n
\nSo the final query with order:
\nSELECT distinct \n Parent.Items.value('local-name(.)', 'varchar(100)') as 'Item',\n Parent.Items.value('local-name(../.)', 'varchar(100)') as 'ParentItem',\n Parent.Items.value('for $i in . return count(../*[. << $i])','int') \n as ChildIndex\n FROM dbo.FormResults \n CROSS APPLY xmlformfields.nodes('/form//*') as Parent(Items)\n ORDER BY ParentItem,ChildIndex\n
\n\n
soup wrap:
To get all nodes, not only those at the first level, use /form//* (// instead of /form/*):
SELECT distinct Parent.Items.value('local-name(.)', 'varchar(100)') as 'Item'
FROM dbo.FormResults
CROSS APPLY xmlformfields.nodes('/form//*') as Parent(Items)
To also get parent nodes, use the ../. syntax in the local-name() call.
To get the index of a child inside its parent node and order by it, you can use the XQuery expression
for $i in . return count(../*[. << $i])
So the final query with order:
SELECT distinct
Parent.Items.value('local-name(.)', 'varchar(100)') as 'Item',
Parent.Items.value('local-name(../.)', 'varchar(100)') as 'ParentItem',
Parent.Items.value('for $i in . return count(../*[. << $i])','int')
as ChildIndex
FROM dbo.FormResults
CROSS APPLY xmlformfields.nodes('/form//*') as Parent(Items)
ORDER BY ParentItem,ChildIndex
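The same traversal (every descendant element, its parent's name, and its position among siblings) can be sketched outside SQL Server with Python's ElementTree; the element names here are made up:

```python
import xml.etree.ElementTree as ET

xml = "<form><a><b/><c/></a><d/></form>"
root = ET.fromstring(xml)

# (local name, parent's local name, 1-based sibling position), mirroring
# local-name(.), local-name(../.) and the count(../*[. << $i]) index.
rows = []
def walk(parent):
    for idx, child in enumerate(list(parent), start=1):
        rows.append((child.tag, parent.tag, idx))
        walk(child)
walk(root)
print(rows)
```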
qid & accept id:
(24116066, 24137252)
query:
Database schema for private messages with many different types of users
soup:
\nYou should have a single range of userids that spans all four groups.\n Then you only need a single table for all message types. – Thilo
\n
\nThis gives tables and statements. A table contains the rows that make its statement true.
\n// assumes teacher(tid,...),student(sid,...),admin(aid,...),parent(pid,...)\n\nuser(uid) -- user [uid] is teacher or student or admin or parent\nuser_is_teacher(uid,tid) -- user [uid] is teacher [tid]\nuser_is_student(uid,sid) -- user [uid] is student [sid]\nuser_is_admin(uid,aid) -- user [uid] is admin [aid]\nuser_is_parent(uid,pid) -- user [uid] is parent [pid]\n\nuser_is_term_student(uid) -- user [uid] is term student\nuser_is_course student(uid) -- user [uid] is course student\n\nmessage_was_sent(mid,sid,rid,date,...) -- message [mid] was sent by user [sid] to user [rid] at [date] ...\nmessage_was_private(mid) -- message [mid] was private\n
\n(Observe that if you had just made statements about user ids you would have discovered they are straightforward not impossible.)
\nA superkey is columns with a unique value. A key is a superkey containing no superkey. Figure them out. Here are some:
\nuser_is_teacher keys (uid),(tid)\nmessage_was_sent key mid,(sid,rid,date)\n
\nA foreign key is columns whose value is a value of some key columns. Figure them out. Here are some:
\nuser_is_teacher fk uid to user uid, fk tid to teacher tid\nmessage_was_sent fk sid to user uid, rid to user uid\n
\nSuggest you write every design in this format.
\n
soup wrap:
You should have a single range of userids that spans all four groups.
Then you only need a single table for all message types. – Thilo
This gives tables and statements. A table contains the rows that make its statement true.
// assumes teacher(tid,...),student(sid,...),admin(aid,...),parent(pid,...)
user(uid) -- user [uid] is teacher or student or admin or parent
user_is_teacher(uid,tid) -- user [uid] is teacher [tid]
user_is_student(uid,sid) -- user [uid] is student [sid]
user_is_admin(uid,aid) -- user [uid] is admin [aid]
user_is_parent(uid,pid) -- user [uid] is parent [pid]
user_is_term_student(uid) -- user [uid] is term student
user_is_course_student(uid) -- user [uid] is course student
message_was_sent(mid,sid,rid,date,...) -- message [mid] was sent by user [sid] to user [rid] at [date] ...
message_was_private(mid) -- message [mid] was private
(Observe that if you had just made statements about user ids, you would have discovered they are straightforward, not impossible.)
A superkey is columns with a unique value. A key is a superkey containing no superkey. Figure them out. Here are some:
user_is_teacher keys (uid),(tid)
message_was_sent key mid,(sid,rid,date)
A foreign key is columns whose value is a value of some key columns. Figure them out. Here are some:
user_is_teacher fk uid to user uid, fk tid to teacher tid
message_was_sent fk sid to user uid, rid to user uid
Suggest you write every design in this format.
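The statement-per-table design above translates directly into DDL. A sketch in SQLite covering the teacher subtype and the message table (column types and NOT NULL choices are assumptions; the answer leaves them open):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.executescript("""
CREATE TABLE user (uid INTEGER PRIMARY KEY);
CREATE TABLE teacher (tid INTEGER PRIMARY KEY);
-- keys (uid) and (tid), each also a foreign key to its base table
CREATE TABLE user_is_teacher (
  uid INTEGER NOT NULL UNIQUE REFERENCES user(uid),
  tid INTEGER NOT NULL UNIQUE REFERENCES teacher(tid)
);
-- key mid, alternate key (sid, rid, date), fks sid and rid to user(uid)
CREATE TABLE message_was_sent (
  mid INTEGER PRIMARY KEY,
  sid INTEGER NOT NULL REFERENCES user(uid),
  rid INTEGER NOT NULL REFERENCES user(uid),
  date TEXT NOT NULL,
  UNIQUE (sid, rid, date)
);
INSERT INTO user VALUES (1), (2);
INSERT INTO message_was_sent VALUES (10, 1, 2, '2014-06-13');
""")
sent = con.execute("SELECT COUNT(*) FROM message_was_sent").fetchone()[0]
print(sent)
```

Because sender and receiver both reference the single user table, one message table covers every pairing of the four user groups.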
qid & accept id:
(24170440, 24170540)
query:
SQL set column to row count
soup:
You can fetch the count of Cars that belong to a driver, along with all Driver data with the following SELECT query:
\nSELECT *\n ,(\n SELECT COUNT(*)\n FROM Cars c\n WHERE c.DriverID = d.DriverID\n )\nFROM Driver d\n
\nYou can UPDATE the NumCars column with the following statement:
\nUPDATE Driver\nSET NumCars = (\n SELECT COUNT(*)\n FROM Cars\n WHERE Driver.DriverID = Cars.DriverID\n )\n
\n
soup wrap:
You can fetch the count of Cars that belong to a driver, along with all Driver data with the following SELECT query:
SELECT *
,(
SELECT COUNT(*)
FROM Cars c
WHERE c.DriverID = d.DriverID
)
FROM Driver d
You can UPDATE the NumCars column with the following statement:
UPDATE Driver
SET NumCars = (
SELECT COUNT(*)
FROM Cars
WHERE Driver.DriverID = Cars.DriverID
)
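Both statements can be exercised against a toy schema. A sketch with Python's sqlite3 (column types assumed):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Driver (DriverID INTEGER PRIMARY KEY, NumCars INTEGER);
CREATE TABLE Cars (CarID INTEGER PRIMARY KEY, DriverID INTEGER);
INSERT INTO Driver VALUES (1, NULL), (2, NULL);
INSERT INTO Cars VALUES (10,1), (11,1), (12,2);
""")

# Same correlated-subquery UPDATE as in the answer.
con.execute("""
UPDATE Driver SET NumCars =
  (SELECT COUNT(*) FROM Cars WHERE Cars.DriverID = Driver.DriverID)
""")
counts = con.execute(
    "SELECT DriverID, NumCars FROM Driver ORDER BY DriverID").fetchall()
print(counts)
```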
qid & accept id:
(24194784, 24194895)
query:
Get SQL Results Between Specific Weekdays and Times
soup:
Just add hours:
\nBETWEEN DATEADD(hh, 7, DATEADD(wk, DATEDIFF(wk, 7, GETDATE()), 7))\n AND DATEADD(hh, 17, DATEADD(wk, DATEDIFF(wk, 11, GETDATE()), 11))\n
\nIf you need to get results within working hours for each day you need to set the time ranges separately:
\nmyDate BETWEEN DATEADD(hh, 7, DATEADD(wk, DATEDIFF(wk, 7, GETDATE()), 7)) \n AND DATEADD(hh, 17, DATEADD(wk, DATEDIFF(wk, 7, GETDATE()), 7)) OR \nmyDate BETWEEN DATEADD(hh, 7, DATEADD(wk, DATEDIFF(wk, 8, GETDATE()), 8)) \n AND DATEADD(hh, 17, DATEADD(wk, DATEDIFF(wk, 8, GETDATE()), 8)) OR \nmyDate BETWEEN DATEADD(hh, 7, DATEADD(wk, DATEDIFF(wk, 9, GETDATE()), 9)) \n AND DATEADD(hh, 17, DATEADD(wk, DATEDIFF(wk, 8, GETDATE()), 8)) etc.\n
\nUpdate: if you have other conditions that follows the date/time condition in your WHERE clause do not forget to enclose the conditions with OR operator into brackets:
\nWHERE\n(myDate BETWEEN DATEADD(hh, 7, DATEADD(wk, DATEDIFF(wk, 7, GETDATE()), 7)) \n AND DATEADD(hh, 17, DATEADD(wk, DATEDIFF(wk, 7, GETDATE()), 7)) OR \n myDate BETWEEN DATEADD(hh, 7, DATEADD(wk, DATEDIFF(wk, 8, GETDATE()), 8)) \n AND DATEADD(hh, 17, DATEADD(wk, DATEDIFF(wk, 8, GETDATE()), 8)) OR \n myDate BETWEEN DATEADD(hh, 7, DATEADD(wk, DATEDIFF(wk, 9, GETDATE()), 9)) \n AND DATEADD(hh, 17, DATEADD(wk, DATEDIFF(wk, 8, GETDATE()), 8)) etc.\n) AND Direction = 1 AND VMDuration = 0 AND ... etc.\n
\nRead about SQL Server operator precedence here for more information
\n
soup wrap:
Just add hours:
BETWEEN DATEADD(hh, 7, DATEADD(wk, DATEDIFF(wk, 7, GETDATE()), 7))
AND DATEADD(hh, 17, DATEADD(wk, DATEDIFF(wk, 11, GETDATE()), 11))
If you need to get results within working hours for each day you need to set the time ranges separately:
myDate BETWEEN DATEADD(hh, 7, DATEADD(wk, DATEDIFF(wk, 7, GETDATE()), 7))
AND DATEADD(hh, 17, DATEADD(wk, DATEDIFF(wk, 7, GETDATE()), 7)) OR
myDate BETWEEN DATEADD(hh, 7, DATEADD(wk, DATEDIFF(wk, 8, GETDATE()), 8))
AND DATEADD(hh, 17, DATEADD(wk, DATEDIFF(wk, 8, GETDATE()), 8)) OR
myDate BETWEEN DATEADD(hh, 7, DATEADD(wk, DATEDIFF(wk, 9, GETDATE()), 9))
AND DATEADD(hh, 17, DATEADD(wk, DATEDIFF(wk, 9, GETDATE()), 9)) etc.
Update: if you have other conditions following the date/time condition in your WHERE clause, do not forget to enclose the OR-connected conditions in parentheses:
WHERE
(myDate BETWEEN DATEADD(hh, 7, DATEADD(wk, DATEDIFF(wk, 7, GETDATE()), 7))
AND DATEADD(hh, 17, DATEADD(wk, DATEDIFF(wk, 7, GETDATE()), 7)) OR
myDate BETWEEN DATEADD(hh, 7, DATEADD(wk, DATEDIFF(wk, 8, GETDATE()), 8))
AND DATEADD(hh, 17, DATEADD(wk, DATEDIFF(wk, 8, GETDATE()), 8)) OR
myDate BETWEEN DATEADD(hh, 7, DATEADD(wk, DATEDIFF(wk, 9, GETDATE()), 9))
AND DATEADD(hh, 17, DATEADD(wk, DATEDIFF(wk, 9, GETDATE()), 9)) etc.
) AND Direction = 1 AND VMDuration = 0 AND ... etc.
Read about SQL Server operator precedence here for more information
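The working-hours filter above can also be sanity-checked outside SQL. Here is a minimal Python sketch (sample timestamps invented) that keeps only timestamps falling Monday through Friday between 07:00 and 17:00, which is what the chained BETWEEN/OR conditions express:

```python
from datetime import datetime

def in_working_hours(ts: datetime) -> bool:
    # weekday(): Monday == 0 ... Friday == 4; keep 07:00 <= time < 17:00
    return ts.weekday() < 5 and 7 <= ts.hour < 17

rows = [
    datetime(2014, 7, 7, 9, 30),   # Monday morning   -> kept
    datetime(2014, 7, 7, 18, 0),   # Monday evening   -> dropped
    datetime(2014, 7, 12, 10, 0),  # Saturday         -> dropped
]
kept = [ts for ts in rows if in_working_hours(ts)]
```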
qid & accept id:
(24207240, 24210673)
query:
Recursive CTE with alternating tables
soup:
Here is a recursive example that I believe meets your criteria. I added a ParentId to the result set, which will be NULL for the root/base file, since it does not have a parent.
\ndeclare @BaseTableId int;\nset @BaseTableId = 1;\n\n; WITH cteRecursive as (\n --anchor/root parent file\n SELECT null as ParentFileId\n , f.FileId as ChildFileID\n , lt.RecursiveId \n , 0 as [level]\n , bt.BaseTableId\n FROM BaseTable bt\n INNER JOIN Files f\n on bt.BaseTableId = f.BaseTableId\n INNER JOIN LinkingTable lt\n on f.FileId = lt.FileId\n WHERE bt.BaseTableId = @BaseTableId \n\n UNION ALL \n\n SELECT cte.ChildFileID as ParentFileID \n , f.FileId as ChildFileID\n , lt.RecursiveId\n , cte.level + 1 as [level]\n , cte.BaseTableId\n FROM cteRecursive cte\n INNER JOIN Files f on cte.RecursiveId = f.RecursiveId\n INNER JOIN LinkingTable lt ON lt.FileId = f.FileId\n)\nSELECT * \nFROM cteRecursive\n;\n
\nResults for @BaseTableID = 1:
\nParentFileId ChildFileID RecursiveId level BaseTableId\n------------ ----------- ----------- ----------- -----------\nNULL 1 1 0 1\n1 3 2 1 1\n3 4 3 2 1\n
\nResults for @BaseTableID = 2:
\nParentFileId ChildFileID RecursiveId level BaseTableId\n------------ ----------- ----------- ----------- -----------\nNULL 2 1 0 2\nNULL 2 4 0 2\n2 6 5 1 2\n6 7 6 2 2\n2 3 2 1 2\n3 4 3 2 2\n
\n
soup wrap:
Here is a recursive example that I believe meets your criteria. I added a ParentId to the result set, which will be NULL for the root/base file, since it does not have a parent.
declare @BaseTableId int;
set @BaseTableId = 1;
; WITH cteRecursive as (
--anchor/root parent file
SELECT null as ParentFileId
, f.FileId as ChildFileID
, lt.RecursiveId
, 0 as [level]
, bt.BaseTableId
FROM BaseTable bt
INNER JOIN Files f
on bt.BaseTableId = f.BaseTableId
INNER JOIN LinkingTable lt
on f.FileId = lt.FileId
WHERE bt.BaseTableId = @BaseTableId
UNION ALL
SELECT cte.ChildFileID as ParentFileID
, f.FileId as ChildFileID
, lt.RecursiveId
, cte.level + 1 as [level]
, cte.BaseTableId
FROM cteRecursive cte
INNER JOIN Files f on cte.RecursiveId = f.RecursiveId
INNER JOIN LinkingTable lt ON lt.FileId = f.FileId
)
SELECT *
FROM cteRecursive
;
Results for @BaseTableID = 1:
ParentFileId ChildFileID RecursiveId level BaseTableId
------------ ----------- ----------- ----------- -----------
NULL 1 1 0 1
1 3 2 1 1
3 4 3 2 1
Results for @BaseTableID = 2:
ParentFileId ChildFileID RecursiveId level BaseTableId
------------ ----------- ----------- ----------- -----------
NULL 2 1 0 2
NULL 2 4 0 2
2 6 5 1 2
6 7 6 2 2
2 3 2 1 2
3 4 3 2 2
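The anchor-plus-recursive-member pattern above can be sketched with a reduced, single-table schema (table and values invented for illustration) using SQLite's WITH RECURSIVE via Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Files (FileId INTEGER, ParentFileId INTEGER);
INSERT INTO Files VALUES (1, NULL), (3, 1), (4, 3);
""")
rows = conn.execute("""
WITH RECURSIVE cte(ParentFileId, ChildFileId, level) AS (
    -- anchor: the root file has no parent
    SELECT NULL, FileId, 0 FROM Files WHERE ParentFileId IS NULL
    UNION ALL
    -- recursive member: attach each file to its parent, one level deeper
    SELECT f.ParentFileId, f.FileId, cte.level + 1
    FROM Files f
    JOIN cte ON f.ParentFileId = cte.ChildFileId
)
SELECT * FROM cte ORDER BY level
""").fetchall()
```

This reproduces the shape of the first result set: a NULL parent for the root, then one row per child with an incrementing level.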
qid & accept id:
(24275420, 24279757)
query:
How to group multiple values into a single column in SQL
soup:
For older versions, I guess WM_CONCAT would work. Modifying Gordon Linoff's query:
\nSELECT T1."PN" as "Part Number", max(T2."QTY") as "Quantity", T2."BRANCH" AS "Location",\n WM_CONCAT(T3."STOCK") as Bins\nFROM "XYZ"."PARTS" T1 JOIN\n "XYZ"."BALANCES" T2\n ON T2."PART_ID" = T1."PART_ID" JOIN\n "XYZ"."DETAILS" T3\n ON T3."PART_ID" = T1."PART_ID"\nGROUP BY t1.PN, t2.Branch\nORDER BY "Part Number", "Location";\n
\nAlso refer this link for an alternate approach: Including the answer in the link for refernce:
\ncreate table countries ( country_name varchar2 (100));\ninsert into countries values ('Albania');\ninsert into countries values ('Andorra');\ninsert into countries values ('Antigua');\n\n\nSELECT SUBSTR (SYS_CONNECT_BY_PATH (country_name , ','), 2) csv\n FROM (SELECT country_name , ROW_NUMBER () OVER (ORDER BY country_name ) rn,\n COUNT (*) OVER () cnt\n FROM countries)\n WHERE rn = cnt\nSTART WITH rn = 1\nCONNECT BY rn = PRIOR rn + 1;\n\nCSV \n--------------------------\nAlbania,Andorra,Antigua \n
\n
soup wrap:
For older versions, I guess WM_CONCAT would work. Modifying Gordon Linoff's query:
SELECT T1."PN" as "Part Number", max(T2."QTY") as "Quantity", T2."BRANCH" AS "Location",
WM_CONCAT(T3."STOCK") as Bins
FROM "XYZ"."PARTS" T1 JOIN
"XYZ"."BALANCES" T2
ON T2."PART_ID" = T1."PART_ID" JOIN
"XYZ"."DETAILS" T3
ON T3."PART_ID" = T1."PART_ID"
GROUP BY t1.PN, t2.Branch
ORDER BY "Part Number", "Location";
Also refer to this link for an alternate approach. The answer from the link is included here for reference:
create table countries ( country_name varchar2 (100));
insert into countries values ('Albania');
insert into countries values ('Andorra');
insert into countries values ('Antigua');
SELECT SUBSTR (SYS_CONNECT_BY_PATH (country_name , ','), 2) csv
FROM (SELECT country_name , ROW_NUMBER () OVER (ORDER BY country_name ) rn,
COUNT (*) OVER () cnt
FROM countries)
WHERE rn = cnt
START WITH rn = 1
CONNECT BY rn = PRIOR rn + 1;
CSV
--------------------------
Albania,Andorra,Antigua
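As a portable sanity check of the same idea, SQLite's group_concat does the job of WM_CONCAT; this Python sketch reuses the countries data (note that group_concat's ordering is not guaranteed in general, but these rows are inserted in alphabetical order anyway):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE countries (country_name TEXT);
INSERT INTO countries VALUES ('Albania'), ('Andorra'), ('Antigua');
""")
# group_concat plays the role of WM_CONCAT / SYS_CONNECT_BY_PATH here
csv = conn.execute(
    "SELECT group_concat(country_name, ',') FROM countries"
).fetchone()[0]
```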
qid & accept id:
(24287463, 24287526)
query:
Create SQL summary using union
soup:
It looks like you want to add
\n WITH ROLLUP\n
\nto the end of your query
\neg:
\nSelect sum(a) as col1, sum(b) as col2\nfrom yourtable\ngroup by something\nwith rollup\n
\nDepending on the full nature of your query, you may prefer to use with cube, which is similar. See http://technet.microsoft.com/en-us/library/ms189305(v=sql.90).aspx
\n
soup wrap:
It looks like you want to add
WITH ROLLUP
to the end of your query
eg:
Select sum(a) as col1, sum(b) as col2
from yourtable
group by something
with rollup
Depending on the full nature of your query, you may prefer to use with cube, which is similar. See http://technet.microsoft.com/en-us/library/ms189305(v=sql.90).aspx
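Where WITH ROLLUP isn't available (SQLite, used below just to make the sketch runnable, lacks it), the grand-total row it adds can be emulated with UNION ALL; table and column names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE yourtable (something TEXT, a INTEGER, b INTEGER);
INSERT INTO yourtable VALUES ('x', 1, 10), ('x', 2, 20), ('y', 3, 30);
""")
rows = conn.execute("""
SELECT * FROM (
    SELECT something, SUM(a) AS col1, SUM(b) AS col2
    FROM yourtable
    GROUP BY something
    UNION ALL
    -- the extra row WITH ROLLUP would have added: totals over all groups
    SELECT NULL, SUM(a), SUM(b) FROM yourtable
) ORDER BY (something IS NULL), something
""").fetchall()
```

The NULL in the grouping column marks the rollup row, exactly as WITH ROLLUP reports it.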
qid & accept id:
(24291644, 24292276)
query:
Extracting first available number and its following text from a string
soup:
\nMS SQL Server 2008 Schema Setup:
\nCREATE TABLE Table1\n ([dosage] varchar(144))\n;\n\nINSERT INTO Table1\n ([dosage])\nVALUES\n ('Pain Medication. 20 mg/100 mL NS (0.2mg/mL) \n Therapy: IV PCA Adult / Qualifier: Standard Continuous Rate = 0 mg/hr, \n IV, Routine PCA Dose = 0.4 mg')\n;\n
\nQuery 1:
\nSELECT substring(dosage,\n PATINDEX('%[0-9]%',dosage),\n PATINDEX('%/%',dosage)-PATINDEX('%[0-9]%',dosage)\n )\nFROM Table1\n
\n\n| COLUMN_0 |\n|----------|\n| 20 mg |\n
\n
soup wrap:
MS SQL Server 2008 Schema Setup:
CREATE TABLE Table1
([dosage] varchar(144))
;
INSERT INTO Table1
([dosage])
VALUES
('Pain Medication. 20 mg/100 mL NS (0.2mg/mL)
Therapy: IV PCA Adult / Qualifier: Standard Continuous Rate = 0 mg/hr,
IV, Routine PCA Dose = 0.4 mg')
;
Query 1:
SELECT substring(dosage,
PATINDEX('%[0-9]%',dosage),
PATINDEX('%/%',dosage)-PATINDEX('%[0-9]%',dosage)
)
FROM Table1
| COLUMN_0 |
|----------|
| 20 mg |
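The PATINDEX arithmetic above, starting at the first digit and stopping before the first '/', maps one-to-one onto string slicing. A Python equivalent using the same dosage string:

```python
import re

dosage = ("Pain Medication. 20 mg/100 mL NS (0.2mg/mL) "
          "Therapy: IV PCA Adult / Qualifier: Standard Continuous Rate = 0 mg/hr, "
          "IV, Routine PCA Dose = 0.4 mg")

first_digit = re.search(r"[0-9]", dosage).start()  # like PATINDEX('%[0-9]%', ...)
first_slash = dosage.index("/")                    # like PATINDEX('%/%', ...)
extracted = dosage[first_digit:first_slash]
```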
qid & accept id:
(24310683, 24311027)
query:
Cursor? Loop? Aggregate up rows data along with row results
soup:
You can do this by using the GROUPING SETS extension of the GROUP BY clause:
\nSELECT Description, \n COALESCE(Parition, 'Total') AS Partition,\n SUM(Total) AS Total\nFROM MyTable\nGROUP BY GROUPING SETS ((Description, Partition), (Description));\n
\nor you could use:
\nSELECT Description, \n COALESCE(Parition, 'Total') AS Partition,\n SUM(Total) AS Total\nFROM MyTable\nGROUP BY ROLLUP (Description, Partition);\n
\nWithout ROLLUP, you can do this using UNION ALL:
\nSELECT Description, \n Parition,\n Total\nFROM MyTable\nUNION ALL\nSELECT Description, \n 'Total' AS Partition,\n SUM(Total) AS Total\nFROM MyTable\nGROUP BY Description;\n
\n
soup wrap:
You can do this by using the GROUPING SETS extension of the GROUP BY clause:
SELECT Description,
COALESCE(Partition, 'Total') AS Partition,
SUM(Total) AS Total
FROM MyTable
GROUP BY GROUPING SETS ((Description, Partition), (Description));
or you could use:
SELECT Description,
COALESCE(Partition, 'Total') AS Partition,
SUM(Total) AS Total
FROM MyTable
GROUP BY ROLLUP (Description, Partition);
Without ROLLUP, you can do this using UNION ALL:
SELECT Description,
Partition,
Total
FROM MyTable
UNION ALL
SELECT Description,
'Total' AS Partition,
SUM(Total) AS Total
FROM MyTable
GROUP BY Description;
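The UNION ALL fallback is easy to check on any engine; this SQLite-via-Python sketch (sample rows invented) appends one 'Total' row per Description, which is the shape GROUPING SETS ((Description, Partition), (Description)) produces:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE MyTable (Description TEXT, "Partition" TEXT, Total INTEGER);
INSERT INTO MyTable VALUES ('A', 'p1', 5), ('A', 'p2', 7), ('B', 'p1', 3);
""")
rows = conn.execute("""
SELECT * FROM (
    -- detail rows, unchanged
    SELECT Description, "Partition", Total FROM MyTable
    UNION ALL
    -- one subtotal row per Description, labelled 'Total'
    SELECT Description, 'Total', SUM(Total) FROM MyTable GROUP BY Description
) ORDER BY Description, ("Partition" = 'Total'), "Partition"
""").fetchall()
```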
qid & accept id:
(24316425, 24316488)
query:
strange calculate data on a table
soup:
You can do what you want with a cumulative sum. The following syntax is ANSI standard and should work (depending on the version of your database):
\nselect sum(a*(revcumb - b)) as a_sum, sum(b*(revcuma - a)) as b_sum\nfrom (select t.*,\n sum(b) over (order by id desc) as revcumb,\n sum(a) over (order by id desc) as revcuma\n from table t\n ) t;\n
\nNote that instead of using rows between or range between, this just subtracts the value in the current row from the (reverse) cumulative sum.
\nAlso note that this assumes the presence of an id column or some other column to specify the ordering of rows. SQL tables are inherently unordered, so you need a column to specify ordering, when that is important.
\nAnd, if you don't have cumulative sum (i.e. SQL Server < 2012), then you can do the same thing with correlated subqueries.
\nEDIT:
\nSybase may or may not support the above. There are so many different versions of that database that it is hardly worth anything as a tag. I think this will work on most versions:
\nselect sum(a*revcumb) as a_sum, sum(b*revcuma) as b_sum\nfrom (select t.*,\n (select sum(b) from table t2 where t2.id > t.id) as revcumb,\n (select sum(a) from table t2 where t2.id > t.id) as revcuma\n from table t\n ) t;\n
\n
soup wrap:
You can do what you want with a cumulative sum. The following syntax is ANSI standard and should work (depending on the version of your database):
select sum(a*(revcumb - b)) as a_sum, sum(b*(revcuma - a)) as b_sum
from (select t.*,
sum(b) over (order by id desc) as revcumb,
sum(a) over (order by id desc) as revcuma
from table t
) t;
Note that instead of using rows between or range between, this just subtracts the value in the current row from the (reverse) cumulative sum.
Also note that this assumes the presence of an id column or some other column to specify the ordering of rows. SQL tables are inherently unordered, so you need a column to specify ordering, when that is important.
And, if you don't have cumulative sums (e.g. SQL Server before 2012), then you can do the same thing with correlated subqueries.
EDIT:
Sybase may or may not support the above. There are so many different versions of that database that it is hardly worth anything as a tag. I think this will work on most versions:
select sum(a*revcumb) as a_sum, sum(b*revcuma) as b_sum
from (select t.*,
(select sum(b) from table t2 where t2.id > t.id) as revcumb,
(select sum(a) from table t2 where t2.id > t.id) as revcuma
from table t
) t;
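Here is a runnable check of the reverse-cumulative-sum trick using SQLite window functions via Python (the table is named t rather than table, which is a reserved word; sample values invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id INTEGER, a INTEGER, b INTEGER);
INSERT INTO t VALUES (1, 1, 10), (2, 2, 20), (3, 3, 30);
""")
row = conn.execute("""
SELECT SUM(a * (revcumb - b)) AS a_sum, SUM(b * (revcuma - a)) AS b_sum
FROM (SELECT t.*,
             SUM(b) OVER (ORDER BY id DESC) AS revcumb,
             SUM(a) OVER (ORDER BY id DESC) AS revcuma
      FROM t)
""").fetchone()
```

With these values both sums equal the sum over pairs i < j of a_i * b_j: 1*20 + 1*30 + 2*30 = 110, confirming that subtracting the current row from the reverse cumulative sum gives "everything after this row".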
qid & accept id:
(24342739, 24342931)
query:
Concatenate rows from a complex select in SQL
soup:
You can use CTE :
\nWITH cteTbl (NominationId, NominationOrderId, GiftName) AS ( Your Query here)\n
\nAnd then concatenate all rows with the same NominationId and NominationOrderId with FOR XML PATH('') and after that replace the first comma , with STUFF:
\nSELECT t.NominationId\n , t.NominationOrderId\n , STUFF( ( SELECT ', ' + GiftName\n FROM cteTbl\n WHERE NominationId = t.NominationId\n AND NominationOrderId = t.NominationOrderId\n ORDER BY GiftName DESC\n FOR XML PATH('') ), 1, 1, '')\nFROM cteTbl t \nGROUP BY t.NominationId\n , t.NominationOrderId\n
\nSQLFiddle
\n
soup wrap:
You can use CTE :
WITH cteTbl (NominationId, NominationOrderId, GiftName) AS ( Your Query here)
Then concatenate all rows with the same NominationId and NominationOrderId using FOR XML PATH(''), and strip the leading comma with STUFF:
SELECT t.NominationId
, t.NominationOrderId
, STUFF( ( SELECT ', ' + GiftName
FROM cteTbl
WHERE NominationId = t.NominationId
AND NominationOrderId = t.NominationOrderId
ORDER BY GiftName DESC
FOR XML PATH('') ), 1, 1, '')
FROM cteTbl t
GROUP BY t.NominationId
, t.NominationOrderId
SQLFiddle
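Outside SQL Server, the STUFF + FOR XML PATH idiom is usually spelled as a grouped string aggregate. A SQLite sketch via Python (sample gifts invented) shows the same per-(NominationId, NominationOrderId) concatenation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cteTbl (NominationId INTEGER, NominationOrderId INTEGER, GiftName TEXT);
INSERT INTO cteTbl VALUES (1, 1, 'Mug'), (1, 1, 'Pen'), (1, 2, 'Hat');
""")
# group_concat collapses each group into one comma-separated string
rows = conn.execute("""
SELECT NominationId, NominationOrderId, group_concat(GiftName, ', ') AS Gifts
FROM cteTbl
GROUP BY NominationId, NominationOrderId
ORDER BY NominationId, NominationOrderId
""").fetchall()
```

Unlike the ORDER BY inside FOR XML PATH, group_concat's element order is not guaranteed, so treat the concatenated string as unordered.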
qid & accept id:
(24372541, 24373732)
query:
SQL PIVOT, JOIN, and aggregate function to generate report
soup:
Interesting. Pivot requires an aggregate function to build the 1-5 values, so you'll have to rewrite your inner query probably as a union, and use MAX() as a throwaway aggregate function (throwaway because every record should be unique, so MAX, MIN, SUM, etc. should all return the same value:
\nSELECT * INTO #newblah from (\n SELECT PersonFK, 1 as StrengthIndex, Strength1 as Strength from blah UNION ALL\n SELECT PersonFK, 2 as StrengthIndex, Strength2 as Strength from blah UNION ALL\n SELECT PersonFK, 3 as StrengthIndex, Strength3 as Strength from blah UNION ALL\n SELECT PersonFK, 4 as StrengthIndex, Strength4 as Strength from blah UNION ALL\n SELECT PersonFK, 5 as StrengthIndex, Strength5 as Strength from blah\n )\n
\nThen
\nselect PersonFK, [Achiever], [Activator], [Adaptability], [Analytical], [Belief] .....\nfrom\n(\n select PersonFK, StrengthIndex, Strength\n from #newblah\n) pivotsource\npivot\n(\n max(StrengthIndex)\n for Strength in ([Achiever], [Activator], [Adaptability], [Analytical], [Belief] ..... )\n) myPivot;\n
\nThe result of that query should be able to be joined back to your other tables to get the Person name, Strength Category, and Team name, so I'll leave that to you. You don't HAVE to do the first join as a temporary table -- you could do it as a subselect inline, so this could all be done in one SQL query, but that seems painful if you can avoid it.
\n
soup wrap:
Interesting. Pivot requires an aggregate function to build the 1-5 values, so you'll have to rewrite your inner query, probably as a union, and use MAX() as a throwaway aggregate function (throwaway because every record should be unique, so MAX, MIN, SUM, etc. should all return the same value):
SELECT * INTO #newblah from (
SELECT PersonFK, 1 as StrengthIndex, Strength1 as Strength from blah UNION ALL
SELECT PersonFK, 2 as StrengthIndex, Strength2 as Strength from blah UNION ALL
SELECT PersonFK, 3 as StrengthIndex, Strength3 as Strength from blah UNION ALL
SELECT PersonFK, 4 as StrengthIndex, Strength4 as Strength from blah UNION ALL
SELECT PersonFK, 5 as StrengthIndex, Strength5 as Strength from blah
) AS src
Then
select PersonFK, [Achiever], [Activator], [Adaptability], [Analytical], [Belief] .....
from
(
select PersonFK, StrengthIndex, Strength
from #newblah
) pivotsource
pivot
(
max(StrengthIndex)
for Strength in ([Achiever], [Activator], [Adaptability], [Analytical], [Belief] ..... )
) myPivot;
The result of that query should be able to be joined back to your other tables to get the Person name, Strength Category, and Team name, so I'll leave that to you. You don't HAVE to do the first join as a temporary table -- you could do it as a subselect inline, so this could all be done in one SQL query, but that seems painful if you can avoid it.
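The pivot itself can also be written by hand as one MAX(CASE ...) per strength name, which is useful for seeing what PIVOT does under the hood. A SQLite sketch via Python with a few invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE newblah (PersonFK INTEGER, StrengthIndex INTEGER, Strength TEXT);
INSERT INTO newblah VALUES (1, 1, 'Achiever'), (1, 2, 'Activator'), (2, 1, 'Belief');
""")
# one MAX(CASE ...) column per pivoted value; MAX is the throwaway aggregate
rows = conn.execute("""
SELECT PersonFK,
       MAX(CASE WHEN Strength = 'Achiever'  THEN StrengthIndex END) AS Achiever,
       MAX(CASE WHEN Strength = 'Activator' THEN StrengthIndex END) AS Activator,
       MAX(CASE WHEN Strength = 'Belief'    THEN StrengthIndex END) AS Belief
FROM newblah
GROUP BY PersonFK
ORDER BY PersonFK
""").fetchall()
```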
qid & accept id:
(24375773, 24377611)
query:
Replace each letter with it's ASCII code in a string in PL/SQL
soup:
I think you might be looking for something like this:
\nCREATE OR REPLACE FUNCTION FUBAR_STR(in_str VARCHAR2) RETURN VARCHAR2 AS\n out_str VARCHAR2(4000) := '';\nBEGIN\n FOR i IN 1..LENGTH(in_str) LOOP\n out_str := out_str || TO_CHAR(ASCII(SUBSTR(in_str,i,1)) - 55);\n END LOOP;\n RETURN out_str;\nEND FUBAR_STR;\n
\nSo when you run:
\nselect fubar_str('abcd') from dual;\n
\nYou get: 42434445.
\nHere is the reversible, safer one to use.
\nCREATE OR REPLACE FUNCTION FUBAR_STR(in_str VARCHAR2) RETURN VARCHAR2 AS\n out_str VARCHAR2(32676) := '';\nBEGIN\n FOR i IN 1..LEAST(LENGTH(in_str),10892) LOOP\n out_str := out_str || LPAD(TO_CHAR(ASCII(SUBSTR(in_str,i,1)) - 55),3,'0');\n END LOOP;\n RETURN out_str;\nEND FUBAR_STR;\n
\nSo when you run:
\nselect fubar_str('abcd') from dual;\n
\nYou get: 042043044045.
\nAnd because I'm really bored tonight:
\nCREATE OR REPLACE FUNCTION UNFUBAR_STR(in_str VARCHAR2) RETURN VARCHAR2 AS\n out_str VARCHAR2(10892) := '';\nBEGIN\n FOR i IN 0..(((LENGTH(in_str) - MOD(LENGTH(in_str),3))/3) - 1) LOOP\n out_str := out_str || CHR(TO_NUMBER(LTRIM(SUBSTR(in_str,(i * 3) + 1,3),'0')) + 55);\n END LOOP;\n RETURN out_str;\nEND UNFUBAR_STR;\n
\nSo when you run:
\nselect unfubar_str('042043044045') from dual;\n
\nYou get: abcd.
\n
soup wrap:
I think you might be looking for something like this:
CREATE OR REPLACE FUNCTION FUBAR_STR(in_str VARCHAR2) RETURN VARCHAR2 AS
out_str VARCHAR2(4000) := '';
BEGIN
FOR i IN 1..LENGTH(in_str) LOOP
out_str := out_str || TO_CHAR(ASCII(SUBSTR(in_str,i,1)) - 55);
END LOOP;
RETURN out_str;
END FUBAR_STR;
So when you run:
select fubar_str('abcd') from dual;
You get: 42434445.
Here is the reversible, safer one to use.
CREATE OR REPLACE FUNCTION FUBAR_STR(in_str VARCHAR2) RETURN VARCHAR2 AS
out_str VARCHAR2(32676) := '';
BEGIN
FOR i IN 1..LEAST(LENGTH(in_str),10892) LOOP
out_str := out_str || LPAD(TO_CHAR(ASCII(SUBSTR(in_str,i,1)) - 55),3,'0');
END LOOP;
RETURN out_str;
END FUBAR_STR;
So when you run:
select fubar_str('abcd') from dual;
You get: 042043044045.
And because I'm really bored tonight:
CREATE OR REPLACE FUNCTION UNFUBAR_STR(in_str VARCHAR2) RETURN VARCHAR2 AS
out_str VARCHAR2(10892) := '';
BEGIN
FOR i IN 0..(((LENGTH(in_str) - MOD(LENGTH(in_str),3))/3) - 1) LOOP
out_str := out_str || CHR(TO_NUMBER(LTRIM(SUBSTR(in_str,(i * 3) + 1,3),'0')) + 55);
END LOOP;
RETURN out_str;
END UNFUBAR_STR;
So when you run:
select unfubar_str('042043044045') from dual;
You get: abcd.
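The zero-padded encoder and its inverse are easy to mirror and round-trip test in Python; like the reversible PL/SQL version, this assumes ordinary ASCII input:

```python
def fubar_str(s: str) -> str:
    # each character becomes (ASCII code - 55), zero-padded to 3 digits,
    # mirroring the reversible PL/SQL version above
    return "".join(f"{ord(ch) - 55:03d}" for ch in s)

def unfubar_str(s: str) -> str:
    # inverse: consume 3 digits at a time and add 55 back
    return "".join(chr(int(s[i:i + 3]) + 55)
                   for i in range(0, len(s) - len(s) % 3, 3))
```

The fixed 3-digit width is what makes the function invertible; the first version loses chunk boundaries.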
qid & accept id:
(24391293, 24399074)
query:
Avoid multiple calls on same function when expanding composite result
soup:
A CTE is not even necessary. A plain subquery does the job as well (tested with pg 9.3):
\nSELECT i, (f).* -- decompose here\nFROM (\n SELECT i, (slow_func(i)) AS f -- do not decompose here\n FROM generate_series(1, 3) i\n ) sub;\n
\nBe sure not to decompose the composite result of the function in the subquery. Reserve that for the outer query.
\nRequires a well known type, of course. Would not work with anonymous records.
\nOr, what @Richard wrote, a LATERAL JOIN works, too. The syntax can be simpler:
\nSELECT * FROM generate_series(1, 3) i, slow_func(i) f\n
\n\nLATERAL is applied implicitly in Postgres 9.3 or later. \n- A function can stand on its own in the
FROM clause, doesn't have to be wrapped in an additional sub-select. Just imagine a table in its place. \n
\nSQL Fiddle with EXPLAIN VERBOSE output for all variants. You can see multiple evaluation of the function if it happens.
\nCOST setting
\nGenerally (should not matter for this particular query), make sure to apply a high cost setting to your function, so the planner knows to avoid evaluating more often then necessary. Like:
\nCREATE OR REPLACE FUNCTION slow_function(int)\n RETURNS result_t AS\n$func$\n -- expensive body\n$func$ LANGUAGE sql IMMUTABLE COST 100000;
\n\n\nLarger values cause the planner to try to avoid evaluating the function more often than necessary.
\n
\n
soup wrap:
A CTE is not even necessary. A plain subquery does the job as well (tested with pg 9.3):
SELECT i, (f).* -- decompose here
FROM (
SELECT i, (slow_func(i)) AS f -- do not decompose here
FROM generate_series(1, 3) i
) sub;
Be sure not to decompose the composite result of the function in the subquery. Reserve that for the outer query.
Requires a well-known type, of course. It would not work with anonymous records.
Or, what @Richard wrote, a LATERAL JOIN works, too. The syntax can be simpler:
SELECT * FROM generate_series(1, 3) i, slow_func(i) f
LATERAL is applied implicitly in Postgres 9.3 or later.
A function can stand on its own in the FROM clause; it doesn't have to be wrapped in an additional sub-select. Just imagine a table in its place.
SQL Fiddle with EXPLAIN VERBOSE output for all variants. You can see multiple evaluation of the function if it happens.
COST setting
Generally (should not matter for this particular query), make sure to apply a high cost setting to your function, so the planner knows to avoid evaluating it more often than necessary. Like:
CREATE OR REPLACE FUNCTION slow_function(int)
RETURNS result_t AS
$func$
-- expensive body
$func$ LANGUAGE sql IMMUTABLE COST 100000;
Larger values cause the planner to try to avoid evaluating the function more often than necessary.
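The planner concern here, that decomposing the composite at the same query level evaluates the function once per output column, has a simple procedural analogue. This Python sketch (toy function and call counter, invented for illustration) shows two calls per row for naive decomposition versus one when the composite is materialized first, mirroring the subquery trick:

```python
calls = 0

def slow_func(i):
    # stand-in for the expensive SQL function returning a composite value
    global calls
    calls += 1
    return {"sq": i * i, "cube": i ** 3}

# decomposing at the same level: the function runs once per extracted field
naive = [(slow_func(i)["sq"], slow_func(i)["cube"]) for i in range(3)]
naive_calls = calls

calls = 0
# subquery style: materialize the composite once, decompose afterwards
rows = [slow_func(i) for i in range(3)]
decomposed = [(r["sq"], r["cube"]) for r in rows]
decomposed_calls = calls
```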
qid & accept id:
(24438529, 24438845)
query:
How can I find missing date range in sql server 2008?
soup:
There may be a simpler way to do this, but often when trying to find missing numbers/dates you need to create those numbers/dates then LEFT JOIN to your existing data to find what is missing. You can create the dates in question with a recursive cte:
\nWITH cal AS (SELECT CAST('2014-07-01' AS DATE) dt\n UNION ALL\n SELECT DATEADD(DAY,1,dt)\n FROM cal\n WHERE dt < '2014-07-30')\nSELECT *\nFROM cal\n
\nThen, you LEFT JOIN to your table to get a list of missing dates:
\nWITH cal AS (SELECT CAST('2014-07-01' AS DATE) dt\n UNION ALL\n SELECT DATEADD(DAY,1,dt)\n FROM cal\n WHERE dt < '2014-07-30')\nSELECT DISTINCT cal.dt \nFROM cal\nLEFT JOIN YourTable a\n ON cal.dt BETWEEN CAST(SS_StartDate AS DATE) AND CAST(SS_EndDate AS DATE)\nWHERE a.SS_StartDate IS NULL\n
\nThen you need to find out whether or not consecutive rows belong in the same range, or if they have a gap between them, using DATEDIFF() and ROW_NUMBER():
\nWITH cal AS (SELECT CAST('2014-07-01' AS DATE) dt\n UNION ALL\n SELECT DATEADD(DAY,1,dt)\n FROM cal\n WHERE dt < '2014-07-30')\n ,dt_list AS (SELECT DISTINCT cal.dt \n FROM cal\n LEFT JOIN YourTable a\n ON cal.dt BETWEEN CAST(SS_StartDate AS DATE) AND CAST(SS_EndDate AS DATE)\n WHERE a.SS_StartDate IS NULL) \nSELECT dt\n ,DATEDIFF(D, ROW_NUMBER() OVER(ORDER BY dt), dt) AS dt_range\nFROM dt_list\n
\nThen use MIN() and MAX() to get the ranges:
\nWITH cal AS (SELECT CAST('2014-07-01' AS DATE) dt\n UNION ALL\n SELECT DATEADD(DAY,1,dt)\n FROM cal\n WHERE dt < '2014-07-30')\n ,dt_list AS (SELECT DISTINCT cal.dt \n FROM cal\n LEFT JOIN YourTable a\n ON cal.dt BETWEEN CAST(SS_StartDate AS DATE) AND CAST(SS_EndDate AS DATE)\n WHERE a.SS_StartDate IS NULL) \n ,dt_range AS (SELECT dt\n ,DATEDIFF(D, ROW_NUMBER() OVER(ORDER BY dt), dt) AS dt_range\n FROM dt_list)\nSELECT MIN(dt) AS BeginRange\n ,MAX(dt) AS EndRange\nFROM dt_range\nGROUP BY dt_range;\n--OPTION (MAXRECURSION 0)\n
\nDemo: SQL Fiddle
\nNote: If the range you're checking is more than 100 days you'll need to specify the MAXRECURSION, 0 means no limit.
\nNote2: If your SE dates are intended to drive the complete date range, then change the cal cte from fixed dates to queries using MIN() and MAX() respectively.
\n
soup wrap:
There may be a simpler way to do this, but often when trying to find missing numbers/dates you need to create those numbers/dates then LEFT JOIN to your existing data to find what is missing. You can create the dates in question with a recursive cte:
WITH cal AS (SELECT CAST('2014-07-01' AS DATE) dt
UNION ALL
SELECT DATEADD(DAY,1,dt)
FROM cal
WHERE dt < '2014-07-30')
SELECT *
FROM cal
Then, you LEFT JOIN to your table to get a list of missing dates:
WITH cal AS (SELECT CAST('2014-07-01' AS DATE) dt
UNION ALL
SELECT DATEADD(DAY,1,dt)
FROM cal
WHERE dt < '2014-07-30')
SELECT DISTINCT cal.dt
FROM cal
LEFT JOIN YourTable a
ON cal.dt BETWEEN CAST(SS_StartDate AS DATE) AND CAST(SS_EndDate AS DATE)
WHERE a.SS_StartDate IS NULL
Then you need to find out whether or not consecutive rows belong in the same range, or if they have a gap between them, using DATEDIFF() and ROW_NUMBER():
WITH cal AS (SELECT CAST('2014-07-01' AS DATE) dt
UNION ALL
SELECT DATEADD(DAY,1,dt)
FROM cal
WHERE dt < '2014-07-30')
,dt_list AS (SELECT DISTINCT cal.dt
FROM cal
LEFT JOIN YourTable a
ON cal.dt BETWEEN CAST(SS_StartDate AS DATE) AND CAST(SS_EndDate AS DATE)
WHERE a.SS_StartDate IS NULL)
SELECT dt
,DATEDIFF(D, ROW_NUMBER() OVER(ORDER BY dt), dt) AS dt_range
FROM dt_list
Then use MIN() and MAX() to get the ranges:
WITH cal AS (SELECT CAST('2014-07-01' AS DATE) dt
UNION ALL
SELECT DATEADD(DAY,1,dt)
FROM cal
WHERE dt < '2014-07-30')
,dt_list AS (SELECT DISTINCT cal.dt
FROM cal
LEFT JOIN YourTable a
ON cal.dt BETWEEN CAST(SS_StartDate AS DATE) AND CAST(SS_EndDate AS DATE)
WHERE a.SS_StartDate IS NULL)
,dt_range AS (SELECT dt
,DATEDIFF(D, ROW_NUMBER() OVER(ORDER BY dt), dt) AS dt_range
FROM dt_list)
SELECT MIN(dt) AS BeginRange
,MAX(dt) AS EndRange
FROM dt_range
GROUP BY dt_range;
--OPTION (MAXRECURSION 0)
Demo: SQL Fiddle
Note: If the range you're checking spans more than 100 days, you'll need to specify OPTION (MAXRECURSION n); 0 means no limit.
Note2: If your start/end dates are intended to drive the complete date range, then change the cal cte from fixed dates to queries using MIN() and MAX() respectively.
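All four steps run essentially unchanged on SQLite (modulo its date functions), which is handy for testing the gaps-and-islands logic; here julianday minus ROW_NUMBER plays the role of the DATEDIFF grouping key (table name and dates invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE YourTable (SS_StartDate TEXT, SS_EndDate TEXT);
INSERT INTO YourTable VALUES ('2014-07-01', '2014-07-03'), ('2014-07-07', '2014-07-08');
""")
rows = conn.execute("""
WITH RECURSIVE cal(dt) AS (
    SELECT '2014-07-01'
    UNION ALL
    SELECT date(dt, '+1 day') FROM cal WHERE dt < '2014-07-10'
),
dt_list AS (          -- days not covered by any stored range
    SELECT DISTINCT cal.dt
    FROM cal
    LEFT JOIN YourTable a ON cal.dt BETWEEN a.SS_StartDate AND a.SS_EndDate
    WHERE a.SS_StartDate IS NULL
),
dt_range AS (         -- consecutive missing days share the same grouping key
    SELECT dt, julianday(dt) - ROW_NUMBER() OVER (ORDER BY dt) AS grp
    FROM dt_list
)
SELECT MIN(dt) AS BeginRange, MAX(dt) AS EndRange
FROM dt_range GROUP BY grp ORDER BY 1
""").fetchall()
```

With coverage for July 1-3 and 7-8, the missing stretches come back as July 4-6 and July 9-10.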
qid & accept id:
(24497436, 24497698)
query:
Select values from different rows in a mysql join
soup:
Since each category (and by the way, you might want to rename either the table or the level so that "category" doesn't mean two different things) has a singular known parent, but an indeterminate number of unknown children, you need to "walk up" from the most specific (at depth = 2) to the most general category, performing a self-join on the category table for each additional value you want to insert.
\nIf you're impatient, skip to the SQL Fiddle link at the bottom of the post. If you'd rather be walked through it, continue reading - it's really not that different from any other case where you have a surrogate ID that you want to replace with data from the corresponding table.
\nYou could start by looking at all the information:
\nSELECT * FROM products AS P\n JOIN\n products_categories AS PC ON P.id = PC.product_id\n JOIN\n categories AS C ON PC.category_id = C.id\nWHERE P.id = 1 AN D C.depth = 2;\n\n+----+------------+------------+-------------+----+-----------+-------+---------+\n| id | name | product_id | category_id | id | parent_id | depth | name |\n+----+------------+------------+-------------+----+-----------+-------+---------+\n| 1 | Rad Widget | 1 | 3 | 3 | 2 | 2 | Widgets |\n+----+------------+------------+-------------+----+-----------+-------+---------+\n
\nFirst thing you have to do is recognize which information is useful and which is not. You don't want to be SELECT *-ing all day here. You have the first two columns you want, and the last column (recognize this as your "class"); you need parent_id to find the next column you want, and let's hold onto depth just for illustration. Forget the rest, they're clutter.
\nSo replace that * with specific column names, alias "class", and go after the data represented by parent_id. This information is stored in the category table - you might be thinking, but I already joined that table! Don't care; do it again, only give it a new alias. Remember that your ON condition is a bit different - the products_categories has done its job already, now you want the row that matches C.parent_id - and that you only need certain columns to find the next parent:
\nSELECT\n P.id,\n P.name,\n C1.parent_id,\n C1.depth,\n C1.name,\n C.name AS 'class'\nFROM\n products AS P\n JOIN\n products_categories AS PC ON P.id = PC.product_id\n JOIN\n categories AS C ON PC.category_id = C.id\n JOIN\n categories AS C1 ON C.parent_id = C1.id\nWHERE\n P.id = 1\n AND C.depth = 2;\n\n+----+------------+-----------+---------------+---------+\n| id | name | parent_id | name | class |\n+----+------------+-----------+---------------+---------+\n| 1 | Rad Widget | 1 | Miscellaneous | Widgets |\n+----+------------+-----------+---------------+---------+\n
\nRepeat the process one more time, aliasing the column you just added and using the new C1.parent_id in your next join condition:
\nSELECT\n P.id,\n P.name,\n PC.category_id,\n C2.parent_id,\n C2.depth,\n C2.name,\n C1.name AS 'category',\n C.name AS 'class'\nFROM\n products AS P\n JOIN\n products_categories AS PC ON P.id = PC.product_id\n JOIN\n categories AS C ON PC.category_id = C.id\n JOIN\n categories AS C1 ON C.parent_id = C1.id\n JOIN\n categories AS C2 ON C1.parent_id = C2.id\nWHERE\n P.id = 1\n AND C.depth = 2;\n\n+----+------------+-----------+-------+-------------+---------------+---------+\n| id | name | parent_id | depth | name | category | class |\n+----+------------+-----------+-------+-------------+---------------+---------+\n| 1 | Rad Widget | NULL | 0 | Electronics | Miscellaneous | Widgets |\n+----+------------+-----------+-------+-------------+---------------+---------+\n
\nNow we're clearly done; we can't join another copy on C2.parent_id = NULL and we also see that depth = 0, so all that's left to do is get rid of the columns we don't want to display and double check our aliases. Here it is in action on SQL Fiddle.
\n
soup wrap:
Since each category has a single known parent but an indeterminate number of unknown children (by the way, you might want to rename either the table or the level so that "category" doesn't mean two different things), you need to "walk up" from the most specific category (at depth = 2) to the most general, performing a self-join on the categories table for each additional value you want to display.
If you're impatient, skip to the SQL Fiddle link at the bottom of the post. If you'd rather be walked through it, continue reading - it's really not that different from any other case where you have a surrogate ID that you want to replace with data from the corresponding table.
You could start by looking at all the information:
SELECT * FROM products AS P
JOIN
products_categories AS PC ON P.id = PC.product_id
JOIN
categories AS C ON PC.category_id = C.id
WHERE P.id = 1 AND C.depth = 2;
+----+------------+------------+-------------+----+-----------+-------+---------+
| id | name | product_id | category_id | id | parent_id | depth | name |
+----+------------+------------+-------------+----+-----------+-------+---------+
| 1 | Rad Widget | 1 | 3 | 3 | 2 | 2 | Widgets |
+----+------------+------------+-------------+----+-----------+-------+---------+
First thing you have to do is recognize which information is useful and which is not. You don't want to be SELECT *-ing all day here. You have the first two columns you want, and the last column (recognize this as your "class"); you need parent_id to find the next column you want, and let's hold onto depth just for illustration. Forget the rest, they're clutter.
So replace that * with specific column names, alias "class", and go after the data represented by parent_id. This information is stored in the category table - you might be thinking, but I already joined that table! Don't care; do it again, only give it a new alias. Remember that your ON condition is a bit different - the products_categories has done its job already, now you want the row that matches C.parent_id - and that you only need certain columns to find the next parent:
SELECT
P.id,
P.name,
C1.parent_id,
C1.depth,
C1.name,
C.name AS 'class'
FROM
products AS P
JOIN
products_categories AS PC ON P.id = PC.product_id
JOIN
categories AS C ON PC.category_id = C.id
JOIN
categories AS C1 ON C.parent_id = C1.id
WHERE
P.id = 1
AND C.depth = 2;
+----+------------+-----------+---------------+---------+
| id | name | parent_id | name | class |
+----+------------+-----------+---------------+---------+
| 1 | Rad Widget | 1 | Miscellaneous | Widgets |
+----+------------+-----------+---------------+---------+
Repeat the process one more time, aliasing the column you just added and using the new C1.parent_id in your next join condition:
SELECT
P.id,
P.name,
PC.category_id,
C2.parent_id,
C2.depth,
C2.name,
C1.name AS 'category',
C.name AS 'class'
FROM
products AS P
JOIN
products_categories AS PC ON P.id = PC.product_id
JOIN
categories AS C ON PC.category_id = C.id
JOIN
categories AS C1 ON C.parent_id = C1.id
JOIN
categories AS C2 ON C1.parent_id = C2.id
WHERE
P.id = 1
AND C.depth = 2;
+----+------------+-------------+-----------+-------+-------------+---------------+---------+
| id | name       | category_id | parent_id | depth | name        | category      | class   |
+----+------------+-------------+-----------+-------+-------------+---------------+---------+
|  1 | Rad Widget |           3 | NULL      |     0 | Electronics | Miscellaneous | Widgets |
+----+------------+-------------+-----------+-------+-------------+---------------+---------+
Now we're clearly done; we can't join another copy on C2.parent_id = NULL and we also see that depth = 0, so all that's left to do is get rid of the columns we don't want to display and double check our aliases. Here it is in action on SQL Fiddle.
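If you want to replay the walkthrough outside MySQL, here is a minimal sketch using Python's sqlite3. The table and column names follow the answer; the data values (the three-level Electronics → Miscellaneous → Widgets tree) are inferred from the result tables above.

```python
import sqlite3

# Tiny dataset implied by the walkthrough's output tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE categories (id INTEGER PRIMARY KEY, parent_id INTEGER,
                         depth INTEGER, name TEXT);
CREATE TABLE products_categories (product_id INTEGER, category_id INTEGER);
INSERT INTO products VALUES (1, 'Rad Widget');
INSERT INTO categories VALUES (1, NULL, 0, 'Electronics'),
                              (2, 1,    1, 'Miscellaneous'),
                              (3, 2,    2, 'Widgets');
INSERT INTO products_categories VALUES (1, 3);
""")

# Final query from the walkthrough: one self-join per level of the tree.
row = conn.execute("""
SELECT P.id, P.name, C2.name, C1.name AS category, C.name AS class
FROM products AS P
JOIN products_categories AS PC ON P.id = PC.product_id
JOIN categories AS C  ON PC.category_id = C.id
JOIN categories AS C1 ON C.parent_id  = C1.id
JOIN categories AS C2 ON C1.parent_id = C2.id
WHERE P.id = 1 AND C.depth = 2
""").fetchone()
print(row)  # (1, 'Rad Widget', 'Electronics', 'Miscellaneous', 'Widgets')
```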
qid & accept id:
(24550681, 24551996)
query:
How to create new table where database's name begin with ...?
soup:
There is a better and cheaper way to do this. This is very very simple and works perfectly.\n
With SELECT INTO statement you can copy the structure of a table as well as data to another table in same or external databases.
\nReference:http://www.w3schools.com/sql/sql_select_into.asp
\nDECLARE @sql VARCHAR(8000)\nSET @sql=''\nSELECT @sql=@sql+'; SELECT * INTO '+name+'.dbo.E_Invent2 FROM OriginalDB.dbo.E_Invent2' FROM sysdatabases WHERE name LIKE 'CM_0%' and name<>'OriginalDB'\nSELECT @sql\nEXEC(@sql)\n
\nHere OriginalDB is the name of the database that contains this table.
\n
\nIf your table in OriginalDB contains data and you want to copy only the structure, then you may try this:
\nDECLARE @sql VARCHAR(8000) \n\nSET @sql=''\n SELECT @sql=@sql+'; SELECT * INTO '+name+'.dbo.E_Invent2 FROM OriginalDB.dbo.E_Invent2 WHERE 1<>1' FROM sysdatabases WHERE name LIKE 'CM_0%' and name<>'OriginalDB'\n SELECT @sql\n EXEC(@sql)\n
\nThis should work; if not, let me know and I'll try to help.
\nNOTE: Constraints will not be copied
\n
soup wrap:
There is a better and cheaper way to do this. It is very simple and works reliably.
With SELECT INTO statement you can copy the structure of a table as well as data to another table in same or external databases.
Reference:http://www.w3schools.com/sql/sql_select_into.asp
DECLARE @sql VARCHAR(8000)
SET @sql=''
SELECT @sql=@sql+'; SELECT * INTO '+name+'.dbo.E_Invent2 FROM OriginalDB.dbo.E_Invent2' FROM sysdatabases WHERE name LIKE 'CM_0%' and name<>'OriginalDB'
SELECT @sql
EXEC(@sql)
Here OriginalDB is the name of the database that contains this table.
If your table in OriginalDB contains data and you want to copy only the structure, then you may try this:
DECLARE @sql VARCHAR(8000)
SET @sql=''
SELECT @sql=@sql+'; SELECT * INTO '+name+'.dbo.E_Invent2 FROM OriginalDB.dbo.E_Invent2 WHERE 1<>1' FROM sysdatabases WHERE name LIKE 'CM_0%' and name<>'OriginalDB'
SELECT @sql
EXEC(@sql)
This should work; if not, let me know and I'll try to help.
NOTE: Constraints will not be copied
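SELECT ... INTO is SQL Server syntax; SQLite's analogue is CREATE TABLE ... AS SELECT, but the WHERE 1<>1 trick behaves the same way: no row matches, so only the column structure is copied. A quick sketch (E_Invent2 and its columns are stand-ins, as in the answer), which also shows that constraints do not come along:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE E_Invent2 (id INTEGER PRIMARY KEY, qty INTEGER)")
conn.execute("INSERT INTO E_Invent2 VALUES (1, 10), (2, 20)")

# WHERE 1<>1 matches nothing, so only the structure is copied.
conn.execute("CREATE TABLE E_Invent2_copy AS SELECT * FROM E_Invent2 WHERE 1<>1")

cols = [r[1] for r in conn.execute("PRAGMA table_info(E_Invent2_copy)")]
nrows = conn.execute("SELECT COUNT(*) FROM E_Invent2_copy").fetchone()[0]
print(cols, nrows)  # ['id', 'qty'] 0
```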
qid & accept id:
(24610143, 24610562)
query:
How to create grouped daily,weekly and monthly reports including calculated fields in SQL Server
soup:
I'm not sure if I understood your question correctly, but this gives you all the users created per day:
\nSELECT year(userCreated), month(userCreated), day(userCreated), count(*)\nFROM Users\nGROUP BY year(userCreated), month(userCreated), day(userCreated)\n
\nthis one by month:
\nSELECT year(userCreated), month(userCreated), count(*)\nFROM Users\nGROUP BY year(userCreated), month(userCreated)\n
\nand this one by week:
\nSELECT year(userCreated), datepart(week, userCreated), count(*)\nFROM Users\nGROUP BY year(userCreated), datepart(week, userCreated)\n
\nEdit: as you noted, the running total field was missing; here is an example for the month query:
\nSELECT year(userCreated), month(userCreated), count(*) AS NewCount,\n(SELECT COUNT(*) FROM Users u2 WHERE \n CAST(CAST(year(u1.userCreated) AS VARCHAR(4)) + RIGHT('0' + CAST(month(u1.userCreated) AS VARCHAR(2)), 2) + '01' AS DATETIME) > u2.userCreated) AS TotalCount\nFROM Users u1\nGROUP BY year(userCreated), month(userCreated)\n
\nHope this helps for the other two queries.
\n
soup wrap:
I'm not sure if I understood your question correctly, but this gives you all the users created per day:
SELECT year(userCreated), month(userCreated), day(userCreated), count(*)
FROM Users
GROUP BY year(userCreated), month(userCreated), day(userCreated)
this one by month:
SELECT year(userCreated), month(userCreated), count(*)
FROM Users
GROUP BY year(userCreated), month(userCreated)
and this one by week:
SELECT year(userCreated), datepart(week, userCreated), count(*)
FROM Users
GROUP BY year(userCreated), datepart(week, userCreated)
Edit: as you noted, the running total field was missing; here is an example for the month query:
SELECT year(userCreated), month(userCreated), count(*) AS NewCount,
(SELECT COUNT(*) FROM Users u2 WHERE
CAST(CAST(year(u1.userCreated) AS VARCHAR(4)) + RIGHT('0' + CAST(month(u1.userCreated) AS VARCHAR(2)), 2) + '01' AS DATETIME) > u2.userCreated) AS TotalCount
FROM Users u1
GROUP BY year(userCreated), month(userCreated)
Hope this helps for the other two queries.
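As a quick sanity check of the grouping pattern (not the running-total subquery), here is a sqlite3 sketch; SQLite has no year()/month()/day() functions, so strftime() stands in, and the Users/userCreated names are the ones the answer assumes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (userCreated TEXT)")
conn.executemany("INSERT INTO Users VALUES (?)", [
    ("2014-07-01",), ("2014-07-01",), ("2014-07-02",), ("2014-08-01",),
])

# Per-day and per-month counts, grouped on the formatted date key.
per_day = conn.execute("""
    SELECT strftime('%Y-%m-%d', userCreated) AS d, COUNT(*)
    FROM Users GROUP BY d ORDER BY d
""").fetchall()
per_month = conn.execute("""
    SELECT strftime('%Y-%m', userCreated) AS m, COUNT(*)
    FROM Users GROUP BY m ORDER BY m
""").fetchall()
print(per_day)    # [('2014-07-01', 2), ('2014-07-02', 1), ('2014-08-01', 1)]
print(per_month)  # [('2014-07', 3), ('2014-08', 1)]
```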
qid & accept id:
(24622282, 24622345)
query:
Select from MS Access Table between two dates?
soup:
Try CDate() to convert your string into a date.
\nselect * from audience \nwhere CDate(audate) between #01/06/2014# and #01/08/2014#;\n
\nIf it doesn't work because CDate does not recognize your format, you can use DateSerial(year, month, day) to build a Date. You will need to use Mid$() and CInt() to build the year, month and day arguments. Something like this for a format "yyyy-mm-dd":
\nDateSerial(CInt(mid(audate, 1, 4)), CInt(mid(audate, 6, 2)), CInt(mid(audate, 9, 2)))\n
\nHope this helps.
\n
soup wrap:
Try CDate() to convert your string into a date.
select * from audience
where CDate(audate) between #01/06/2014# and #01/08/2014#;
If it doesn't work because CDate does not recognize your format, you can use DateSerial(year, month, day) to build a Date. You will need to use Mid$() and CInt() to build the year, month and day arguments. Something like this for a format "yyyy-mm-dd":
DateSerial(CInt(mid(audate, 1, 4)), CInt(mid(audate, 6, 2)), CInt(mid(audate, 9, 2)))
Hope this helps.
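DateSerial and Mid are Access/VBA; the same rebuild-a-date-from-slices idea looks like this in Python (date_serial is a hypothetical helper name, and the slice positions mirror the 1-based Mid() positions 1-4, 6-7, 9-10 in the answer):

```python
from datetime import date

def date_serial(audate: str) -> date:
    # "yyyy-mm-dd" -> date, slicing the same pieces as the Mid() calls.
    return date(int(audate[0:4]), int(audate[5:7]), int(audate[8:10]))

d = date_serial("2014-06-01")
print(d)  # 2014-06-01
```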
qid & accept id:
(24633875, 24635063)
query:
Oracle: insert from type table
soup:
Assuming that you have something like
\nCREATE TYPE my_nested_table_type\n AS TABLE OF <>;\n\nDECLARE\n l_nt my_nested_table_type;\nBEGIN\n <>\n
\nthen the way to do a bulk insert of the data from the collection into a heap-organized table would be to use a FORALL
\nFORALL i in 1..l_nt.count\n INSERT INTO some_table( <> )\n VALUES( l_nt(i).col1, l_nt(i).col2, ... , l_nt(i).colN );\n
\n
soup wrap:
Assuming that you have something like
CREATE TYPE my_nested_table_type
AS TABLE OF <>;
DECLARE
l_nt my_nested_table_type;
BEGIN
<>
then the way to do a bulk insert of the data from the collection into a heap-organized table would be to use a FORALL
FORALL i in 1..l_nt.count
INSERT INTO some_table( <> )
VALUES( l_nt(i).col1, l_nt(i).col2, ... , l_nt(i).colN );
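FORALL is PL/SQL-only; as a rough analogue outside Oracle, a bulk bind over a collection looks like executemany() in Python's sqlite3. Here some_table and its columns are placeholders standing in for the answer's elided type, and the list of tuples plays the role of the nested table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE some_table (col1 TEXT, col2 INTEGER)")

# Stands in for the l_nt collection; one bulk bind instead of a row loop.
l_nt = [("a", 1), ("b", 2), ("c", 3)]
conn.executemany("INSERT INTO some_table (col1, col2) VALUES (?, ?)", l_nt)

count = conn.execute("SELECT COUNT(*) FROM some_table").fetchone()[0]
print(count)  # 3
```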
qid & accept id:
(24636896, 24637221)
query:
SQL sum of all unique values per date
soup:
How you combine values depends on the database. That is the only tricky part of a question that is otherwise basic SQL. Here is an example using the standard concat() function:
\nselect date, concat(event1, event2, event3) as comb_event, count(*)\nfrom example\ngroup by date, concat(event1, event2, event3)\norder by date, concat(event1, event2, event3);\n
\nDepending on the database, the syntax might be:
\nselect date, event1 || event2 || event3 as comb_event, count(*)\nfrom example\ngroup by date, event1 || event2 || event3\norder by date, event1 || event2 || event3;\n
\nor:
\nselect date, event1 + event2 + event3 as comb_event, count(*)\nfrom example\ngroup by date, event1 + event2 + event3\norder by date, event1 + event2 + event3;\n
\nor even:
\nselect date, event1 & event2 & event3 as comb_event, count(*)\nfrom example\ngroup by date, event1 & event2 & event3\norder by date, event1 & event2 & event3;\n
\n
soup wrap:
How you combine values depends on the database. That is the only tricky part of a question that is otherwise basic SQL. Here is an example using the standard concat() function:
select date, concat(event1, event2, event3) as comb_event, count(*)
from example
group by date, concat(event1, event2, event3)
order by date, concat(event1, event2, event3);
Depending on the database, the syntax might be:
select date, event1 || event2 || event3 as comb_event, count(*)
from example
group by date, event1 || event2 || event3
order by date, event1 || event2 || event3;
or:
select date, event1 + event2 + event3 as comb_event, count(*)
from example
group by date, event1 + event2 + event3
order by date, event1 + event2 + event3;
or even:
select date, event1 & event2 & event3 as comb_event, count(*)
from example
group by date, event1 & event2 & event3
order by date, event1 & event2 & event3;
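SQLite takes the || flavour, so the second variant can be checked directly. One caveat added here (not part of the answer): bare concatenation makes 'ab'||'c' and 'a'||'bc' collide, so a separator between events is a safer grouping key:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE example (date TEXT, event1 TEXT, event2 TEXT, event3 TEXT)")
conn.executemany("INSERT INTO example VALUES (?,?,?,?)", [
    ("2014-07-01", "a", "b", "c"),
    ("2014-07-01", "a", "b", "c"),
    ("2014-07-01", "x", "y", "z"),
])

# '|' separator avoids accidental collisions between combined keys.
rows = conn.execute("""
    SELECT date, event1 || '|' || event2 || '|' || event3 AS comb_event, COUNT(*)
    FROM example
    GROUP BY date, comb_event
    ORDER BY date, comb_event
""").fetchall()
print(rows)  # [('2014-07-01', 'a|b|c', 2), ('2014-07-01', 'x|y|z', 1)]
```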
qid & accept id:
(24656842, 24659678)
query:
Revert CAST(0xABCD AS date)
soup:
SELECT CAST(CAST(0xABCD AS INT) AS DATETIME)\n
\n-- 2020-06-01 00:00:00.000
\nSELECT CAST(CAST(CAST('2020-06-01 00:00:00.000' AS DATETIME) AS INT) AS BINARY(2))\n
\n-- 0xABCD
\n
soup wrap:
SELECT CAST(CAST(0xABCD AS INT) AS DATETIME)
-- 2020-06-01 00:00:00.000
SELECT CAST(CAST(CAST('2020-06-01 00:00:00.000' AS DATETIME) AS INT) AS BINARY(2))
-- 0xABCD
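The round trip works because SQL Server stores the date part of a DATETIME as a count of days since 1900-01-01, so the arithmetic can be verified without a database:

```python
from datetime import date, timedelta

days = 0xABCD  # 43981 days, the integer behind CAST(0xABCD AS INT)
d = date(1900, 1, 1) + timedelta(days=days)
print(d)  # 2020-06-01

# And back again: the day count recovers the original hex value.
assert (d - date(1900, 1, 1)).days == 0xABCD
```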
qid & accept id:
(24657408, 24661152)
query:
Pass EXEC command as a variable into .sql
soup:
Turns out the issue was regarding the semicolon at the end of my %command% variable's value. I removed the semicolon from the value of the variable and added it to the end of the exec command in the .sql file. I also wrapped the %command% parameter pass in quotes because the variable contained spaces.
\nfile.sql
\nset serveroutput on\nvariable out_val varchar2;\nexec &1;\nprint out_val\nexit\n
\nmybatch.bat
\nset procedure=%1\nset param1=%2\nset param2=%3\nset strYN = ' '\nset command=%procedure%('%param1%', '%param2%', :out_val)\n\nrem ** This line stores out_val value Y or N as strYN.\nfor /F "usebackq" %%i in (`sqlplus database/pw@user @"file.sql" "%command%"`) do (\n set stryn=%%i\n if /I "!strYN!"=="N" (goto:nextN) else (if /I "!strYN!"=="Y" goto:nextY)\n)\n
\n
soup wrap:
Turns out the issue was regarding the semicolon at the end of my %command% variable's value. I removed the semicolon from the value of the variable and added it to the end of the exec command in the .sql file. I also wrapped the %command% parameter pass in quotes because the variable contained spaces.
file.sql
set serveroutput on
variable out_val varchar2;
exec &1;
print out_val
exit
mybatch.bat
set procedure=%1
set param1=%2
set param2=%3
set strYN = ' '
set command=%procedure%('%param1%', '%param2%', :out_val)
rem ** This line stores out_val value Y or N as strYN.
for /F "usebackq" %%i in (`sqlplus database/pw@user @"file.sql" "%command%"`) do (
set stryn=%%i
if /I "!strYN!"=="N" (goto:nextN) else (if /I "!strYN!"=="Y" goto:nextY)
)
qid & accept id:
(24660075, 25095318)
query:
Counting the number of hits for a given search query/term per document in Oracle
soup:
You can continue using CTX_DOC; the procedure HIGHLIGHT can be contorted slightly to do exactly what you're asking for.
\nUsing this environment:
\ncreate table docs ( id number, text clob, primary key (id) );\n\nTable created.\n\ninsert all\n into docs values (1, to_clob('a dog and a dog'))\n into docs values (2, to_clob('a dog and a cat'))\n into docs values (3, to_clob('just a cat'))\nselect * from dual;\n\n3 rows created.\n\ncreate index i_text_docs on docs(text) indextype is ctxsys.context;\n\nIndex created.\n
\nCTX_DOC.HIGHLIGHT has an OUT parameter of a HIGHLIGHT_TAB type, which contains the count of the number of hits within a document.
\ndeclare\n l_highlight ctx_doc.highlight_tab;\nbegin\n ctx_doc.set_key_type('PRIMARY_KEY');\n\n for i in ( select * from docs where contains(text, 'dog') > 0 ) loop\n ctx_doc.highlight('I_TEXT_DOCS', i.id, 'dog', l_highlight);\n dbms_output.put_line('id: ' || i.id || ' hits: ' || l_highlight.count);\n end loop;\n\nend;\n/\nid: 1 hits: 2\nid: 2 hits: 1\n\nPL/SQL procedure successfully completed.\n
\nObviously if you're doing this in a query then a procedure isn't the best thing in the world, but you can wrap it in a function if you want:
\ncreate or replace function docs_count (\n Pid in docs.id%type, Ptext in varchar2\n ) return integer is\n\n l_highlight ctx_doc.highlight_tab;\nbegin\n ctx_doc.set_key_type('PRIMARY_KEY');\n ctx_doc.highlight('I_TEXT_DOCS', Pid, Ptext, l_highlight);\n return l_highlight.count;\nend;\n
\nThis can then be called normally
\nselect id\n , to_char(text) as text\n , docs_count(id, 'dog') as dogs\n , docs_count(id, 'cat') as cats\n from docs;\n\n ID TEXT DOGS CATS\n---------- --------------- ---------- ----------\n 1 a dog and a dog 2 0\n 2 a dog and a cat 1 1\n 3 just a cat 0 1\n
\nIf possible, it might be simpler to replace the keywords as Gordon notes. I'd use DBMS_LOB.GETLENGTH() function instead of simply LENGTH() to avoid potential problems, but REPLACE() works on CLOBs so this won't be a problem. Something like the following (assuming we're still searching for dogs)
\nselect (dbms_lob.getlength(text) - dbms_lob.getlength(replace(text, 'dog')))\n / length('dog')\n from docs\n
\nIt's worth noting that string searching gets progressively slower as strings get larger (hence the need for text indexing) so while this performs fine on the tiny example given it might suffer from performance problems on larger documents.
\n
\nI've just seen your comment:
\n\n... but it would require me going through each document and doing a count of the hits which frankly is computationally expensive
\n
\nNo matter what you do you're going to have to go through each document. You want to find the exact number of instances of a string within another string and the only way to do this is to look through the entire string. (I would highly recommend reading Joel's post on strings; it makes a point about XML and relational databases but I think it fits nicely here too.) If you were looking for an estimate you could calculate the number of times a word appears in the first 100 characters and then average it out over the length of the LOB (crap algorithm I know), but you want to be accurate.
\nObviously we don't know how Oracle has implemented all their functions internally, but let's make some assumptions. To calculate the length of a string you need to literally count the number of bytes in it. This means iterating over the entire string. There are some algorithms to improve this, but they still involve iterating over the string. If you want to replace a string with another string, you have to iterate over the original string, looking for the string you want to replace.
\nTheoretically, depending on how Oracle's implemented everything, using CTX_DOC.HIGHLIGHT should be quicker than anything else as it only has to iterate over the original string once, looking for the string you want to find and storing the byte/character offset from the start of the original string.
\nThe suggested length(text) - length(replace(text, 'dog')) approach may have to iterate three separate times over the original string (or something that's close to it in length). I doubt that it would actually do this as everything can be cached and Oracle should be storing the byte length to make LENGTH() efficient. This is the reason I suggest using DBMS_LOB.GETLENGTH rather than just LENGTH(); Oracle's almost certainly storing the byte length of the document.
\nIf you don't want to parse the document each time you run your queries it might be worth doing a single run when loading/updating data and store, separately, the words and the number of occurrences per document.
\n
soup wrap:
You can continue using CTX_DOC; the procedure HIGHLIGHT can be contorted slightly to do exactly what you're asking for.
Using this environment:
create table docs ( id number, text clob, primary key (id) );
Table created.
insert all
into docs values (1, to_clob('a dog and a dog'))
into docs values (2, to_clob('a dog and a cat'))
into docs values (3, to_clob('just a cat'))
select * from dual;
3 rows created.
create index i_text_docs on docs(text) indextype is ctxsys.context;
Index created.
CTX_DOC.HIGHLIGHT has an OUT parameter of a HIGHLIGHT_TAB type, which contains the count of the number of hits within a document.
declare
l_highlight ctx_doc.highlight_tab;
begin
ctx_doc.set_key_type('PRIMARY_KEY');
for i in ( select * from docs where contains(text, 'dog') > 0 ) loop
ctx_doc.highlight('I_TEXT_DOCS', i.id, 'dog', l_highlight);
dbms_output.put_line('id: ' || i.id || ' hits: ' || l_highlight.count);
end loop;
end;
/
id: 1 hits: 2
id: 2 hits: 1
PL/SQL procedure successfully completed.
Obviously if you're doing this in a query then a procedure isn't the best thing in the world, but you can wrap it in a function if you want:
create or replace function docs_count (
Pid in docs.id%type, Ptext in varchar2
) return integer is
l_highlight ctx_doc.highlight_tab;
begin
ctx_doc.set_key_type('PRIMARY_KEY');
ctx_doc.highlight('I_TEXT_DOCS', Pid, Ptext, l_highlight);
return l_highlight.count;
end;
This can then be called normally
select id
, to_char(text) as text
, docs_count(id, 'dog') as dogs
, docs_count(id, 'cat') as cats
from docs;
ID TEXT DOGS CATS
---------- --------------- ---------- ----------
1 a dog and a dog 2 0
2 a dog and a cat 1 1
3 just a cat 0 1
If possible, it might be simpler to replace the keywords as Gordon notes. I'd use DBMS_LOB.GETLENGTH() function instead of simply LENGTH() to avoid potential problems, but REPLACE() works on CLOBs so this won't be a problem. Something like the following (assuming we're still searching for dogs)
select (dbms_lob.getlength(text) - dbms_lob.getlength(replace(text, 'dog')))
/ length('dog')
from docs
It's worth noting that string searching gets progressively slower as strings get larger (hence the need for text indexing) so while this performs fine on the tiny example given it might suffer from performance problems on larger documents.
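The arithmetic of that length/replace query can be checked in plain Python: removing every 'dog' and dividing the lost length by len('dog') must agree with a direct substring count over the same three sample documents:

```python
def occurrences(text: str, word: str) -> int:
    # Same identity as the SQL: (len before - len after replace) / word len.
    return (len(text) - len(text.replace(word, ""))) // len(word)

docs = {1: "a dog and a dog", 2: "a dog and a cat", 3: "just a cat"}
counts = {k: occurrences(v, "dog") for k, v in docs.items()}
print(counts)  # {1: 2, 2: 1, 3: 0}

# Cross-check against Python's own substring counter.
assert all(occurrences(t, "dog") == t.count("dog") for t in docs.values())
```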
I've just seen your comment:
... but it would require me going through each document and doing a count of the hits which frankly is computationally expensive
No matter what you do you're going to have to go through each document. You want to find the exact number of instances of a string within another string and the only way to do this is to look through the entire string. (I would highly recommend reading Joel's post on strings; it makes a point about XML and relational databases but I think it fits nicely here too.) If you were looking for an estimate you could calculate the number of times a word appears in the first 100 characters and then average it out over the length of the LOB (crap algorithm I know), but you want to be accurate.
Obviously we don't know how Oracle has implemented all their functions internally, but let's make some assumptions. To calculate the length of a string you need to literally count the number of bytes in it. This means iterating over the entire string. There are some algorithms to improve this, but they still involve iterating over the string. If you want to replace a string with another string, you have to iterate over the original string, looking for the string you want to replace.
Theoretically, depending on how Oracle's implemented everything, using CTX_DOC.HIGHLIGHT should be quicker than anything else as it only has to iterate over the original string once, looking for the string you want to find and storing the byte/character offset from the start of the original string.
The suggested length(text) - length(replace(text, 'dog')) approach may have to iterate three separate times over the original string (or something that's close to it in length). I doubt that it would actually do this as everything can be cached and Oracle should be storing the byte length to make LENGTH() efficient. This is the reason I suggest using DBMS_LOB.GETLENGTH rather than just LENGTH(); Oracle's almost certainly storing the byte length of the document.
If you don't want to parse the document each time you run your queries it might be worth doing a single run when loading/updating data and store, separately, the words and the number of occurrences per document.
qid & accept id:
(24669926, 24684247)
query:
Use SQL to remove duplicates from a type 2 slowly changing dimension
soup:
The following query, containing multiple CTEs, compresses the date ranges of the updates and removes duplicate values.
\n1 First ranks are assigned within each id group, based on the RowStartDate.
\n2 Next, the maximum rank (next_rank_no) of the range of ranks which has the same value for NAME is determined. Thus, for the example data, row 1 of id=5 would have next_rank_no=5 and row 2 of id=4 would have next_rank_no=3. This version only handles the NAME column. If you want to handle additional columns, they must be included in the condition as well. For example, if you want to include a LOCATION column, then the join conditions would read as:
\n left join sorted_versions sv2 on sv2.id = sv1.id and sv2.rank_no > sv1.rank_no and sv2.name = sv1.name and sv2.location = sv1.location\n left join sorted_versions sv3 on sv3.id = sv1.id and sv3.rank_no > sv1.rank_no and (sv3.name <> sv1.name or sv3.location <> sv1.location)\n
\n3 Finally, the first row for each id is selected. Then, the row corresponding to the next_rank_no is selected in a recursive fashion.
\nwith sorted_versions as --ranks are assigned within each id group\n(\n select \n v1.id,\n v1.name,\n v1.RowStartDate,\n v1.RowEndDate,\n rank() over (partition by v1.id order by v1.RowStartDate) rank_no\n from versions v1\n left join versions v2 on (v1.id = v2.id and v2.RowStartDate = v1.RowEndDate)\n),\nnext_rank as --the maximum rank of the range of ranks which has the same value for NAME\n(\n select \n sv1.id id, sv1.rank_no rank_no, COALESCE(min(sv3.rank_no)-1 , COALESCE(max(sv2.rank_no), sv1.rank_no)) next_rank_no\n from sorted_versions sv1\n left join sorted_versions sv2 on sv2.id = sv1.id and sv2.rank_no > sv1.rank_no and sv2.name = sv1.name\n left join sorted_versions sv3 on sv3.id = sv1.id and sv3.rank_no > sv1.rank_no and sv3.name <> sv1.name\n group by sv1.id, sv1.rank_no\n),\nversions_cte as --the rowenddate of the "maximum rank" is selected \n(\n select sv.id, sv.name, sv.rowstartdate, sv3.rowenddate, nr.next_rank_no rank_no\n from sorted_versions sv\n inner join next_rank nr on sv.id = nr.id and sv.rank_no = nr.rank_no and sv.rank_no = 1\n inner join sorted_versions sv3 on nr.id = sv3.id and nr.next_rank_no = sv3.rank_no \n union all\n select\n sv2.id,\n sv2.name, \n sv2.rowstartdate,\n sv3.rowenddate,\n nr.next_rank_no\n from versions_cte vc\n inner join sorted_versions sv2 on sv2.id = vc.id and sv2.rank_no = vc.rank_no + 1\n inner join next_rank nr on sv2.id = nr.id and sv2.rank_no = nr.rank_no \n inner join sorted_versions sv3 on nr.id = sv3.id and nr.next_rank_no = sv3.rank_no\n)\nselect id, name, rowstartdate, rowenddate\nfrom versions_cte\norder by id, rowstartdate;\n
\n\n
soup wrap:
The following query, containing multiple CTEs, compresses the date ranges of the updates and removes duplicate values.
1. First, ranks are assigned within each id group, based on the RowStartDate.
2. Next, the maximum rank (next_rank_no) of the range of ranks which has the same value for NAME is determined. Thus, for the example data, row 1 of id=5 would have next_rank_no=5 and row 2 of id=4 would have next_rank_no=3. This version only handles the NAME column. If you want to handle additional columns, they must be included in the condition as well. For example, if you want to include a LOCATION column, then the join conditions would read as:
left join sorted_versions sv2 on sv2.id = sv1.id and sv2.rank_no > sv1.rank_no and sv2.name = sv1.name and sv2.location = sv1.location
left join sorted_versions sv3 on sv3.id = sv1.id and sv3.rank_no > sv1.rank_no and (sv3.name <> sv1.name or sv3.location <> sv1.location)
3. Finally, the first row for each id is selected. Then, the row corresponding to the next_rank_no is selected in a recursive fashion.
with sorted_versions as --ranks are assigned within each id group
(
select
v1.id,
v1.name,
v1.RowStartDate,
v1.RowEndDate,
rank() over (partition by v1.id order by v1.RowStartDate) rank_no
from versions v1
left join versions v2 on (v1.id = v2.id and v2.RowStartDate = v1.RowEndDate)
),
next_rank as --the maximum rank of the range of ranks which has the same value for NAME
(
select
sv1.id id, sv1.rank_no rank_no, COALESCE(min(sv3.rank_no)-1 , COALESCE(max(sv2.rank_no), sv1.rank_no)) next_rank_no
from sorted_versions sv1
left join sorted_versions sv2 on sv2.id = sv1.id and sv2.rank_no > sv1.rank_no and sv2.name = sv1.name
left join sorted_versions sv3 on sv3.id = sv1.id and sv3.rank_no > sv1.rank_no and sv3.name <> sv1.name
group by sv1.id, sv1.rank_no
),
versions_cte as --the rowenddate of the "maximum rank" is selected
(
select sv.id, sv.name, sv.rowstartdate, sv3.rowenddate, nr.next_rank_no rank_no
from sorted_versions sv
inner join next_rank nr on sv.id = nr.id and sv.rank_no = nr.rank_no and sv.rank_no = 1
inner join sorted_versions sv3 on nr.id = sv3.id and nr.next_rank_no = sv3.rank_no
union all
select
sv2.id,
sv2.name,
sv2.rowstartdate,
sv3.rowenddate,
nr.next_rank_no
from versions_cte vc
inner join sorted_versions sv2 on sv2.id = vc.id and sv2.rank_no = vc.rank_no + 1
inner join next_rank nr on sv2.id = nr.id and sv2.rank_no = nr.rank_no
inner join sorted_versions sv3 on nr.id = sv3.id and nr.next_rank_no = sv3.rank_no
)
select id, name, rowstartdate, rowenddate
from versions_cte
order by id, rowstartdate;
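The net effect of the CTE chain - collapsing consecutive rows with the same NAME into one date range - can be sketched in a few lines of Python. The sample rows here are invented for illustration, the input must already be sorted by id and RowStartDate (as the RANK() step guarantees), and contiguous ranges are assumed:

```python
from itertools import groupby

rows = [  # (id, name, RowStartDate, RowEndDate), already sorted
    (5, "A", "2014-01-01", "2014-02-01"),
    (5, "A", "2014-02-01", "2014-03-01"),
    (5, "B", "2014-03-01", "2014-04-01"),
]

# Collapse each consecutive run of identical (id, name) into one range,
# keeping the run's first start date and last end date.
compressed = []
for (vid, name), run in groupby(rows, key=lambda r: (r[0], r[1])):
    run = list(run)
    compressed.append((vid, name, run[0][2], run[-1][3]))
print(compressed)
# [(5, 'A', '2014-01-01', '2014-03-01'), (5, 'B', '2014-03-01', '2014-04-01')]
```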
qid & accept id:
(24707125, 24707293)
query:
How to merge two SQL rows (same item ID) with the SUM() qty but show only the last row's info?
soup:
You're correct in thinking of partition by; though you'll also need to use a join (or an inline SQL in the results). Simplified example below:
\nselect firstRow.id\n, firstRow.upc\n, firstRow.name\n, sum(d.value) TotalUPCValue\nfrom (\n select id, upc, name\n , row_number() over (partition by upc order by id) r\n from demo\n) firstRow\ninner join demo d on d.upc = firstRow.upc\nwhere firstRow.r = 1\ngroup by firstRow.id\n, firstRow.upc\n, firstRow.name\n
\nWorking copy with table definition on SQL Fiddle: http://sqlfiddle.com/#!6/6bfee/1
\nHere's the alternate version which doesn't use a join:
\nselect id\n, upc\n, name\n, (select sum(d.value) from demo d where d.upc = firstRow.upc) TotalUPCValue\nfrom (\n select id, upc, name\n , row_number() over (partition by upc order by id) r\n from demo\n) firstRow\nwhere firstRow.r = 1\n
\nSQL Fiddle: http://sqlfiddle.com/#!6/6bfee/2
\nThe first (join) method should typically be faster, but it's worth comparing against your data to confirm that.
\nUPDATE
\nThanks to @AndriyM for improving my second version:
\nselect id\n, upc\n, name\n, TotalUPCValue\nfrom (\n select id, upc, name\n , row_number() over (partition by upc order by id) r\n , sum(value) over (partition by upc) as TotalUPCValue\n from demo\n) firstRow\nwhere firstRow.r = 1\n;\n
\nSQL Fiddle: http://sqlfiddle.com/#!6/6bfee/7
\n
soup wrap:
You're correct in thinking of partition by, though you'll also need to use a join (or an inline subquery in the select list). Simplified example below:
select firstRow.id
, firstRow.upc
, firstRow.name
, sum(d.value) TotalUPCValue
from (
select id, upc, name
, row_number() over (partition by upc order by id) r
from demo
) firstRow
inner join demo d on d.upc = firstRow.upc
where firstRow.r = 1
group by firstRow.id
, firstRow.upc
, firstRow.name
Working copy with table definition on SQL Fiddle: http://sqlfiddle.com/#!6/6bfee/1
Here's the alternate version which doesn't use a join:
select id
, upc
, name
, (select sum(d.value) from demo d where d.upc = firstRow.upc) TotalUPCValue
from (
select id, upc, name
, row_number() over (partition by upc order by id) r
from demo
) firstRow
where firstRow.r = 1
SQL Fiddle: http://sqlfiddle.com/#!6/6bfee/2
The first (join) method should typically be faster, but it's worth comparing against your data to confirm that.
UPDATE
Thanks to @AndriyM for improving my second version:
select id
, upc
, name
, TotalUPCValue
from (
select id, upc, name
, row_number() over (partition by upc order by id) r
, sum(value) over (partition by upc) as TotalUPCValue
from demo
) firstRow
where firstRow.r = 1
;
SQL Fiddle: http://sqlfiddle.com/#!6/6bfee/7
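The final version translates directly to SQLite (3.25+ for window functions), so it can be rerun locally against a tiny stand-in for the demo table (the sample rows here are invented):

```python
import sqlite3  # window functions require SQLite 3.25+

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE demo (id INTEGER, upc TEXT, name TEXT, value INTEGER)")
conn.executemany("INSERT INTO demo VALUES (?,?,?,?)", [
    (1, "111", "widget",    5),
    (2, "111", "widget v2", 7),
    (3, "222", "gadget",    3),
])

# One surviving row per upc (r = 1), carrying the partition-wide sum.
rows = conn.execute("""
    SELECT id, upc, name, TotalUPCValue
    FROM (
        SELECT id, upc, name,
               ROW_NUMBER() OVER (PARTITION BY upc ORDER BY id) AS r,
               SUM(value)   OVER (PARTITION BY upc)             AS TotalUPCValue
        FROM demo
    ) firstRow
    WHERE firstRow.r = 1
    ORDER BY upc
""").fetchall()
print(rows)  # [(1, '111', 'widget', 12), (3, '222', 'gadget', 3)]
```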
qid & accept id:
(24726688, 24726929)
query:
Use SSIS To Copy A Table's Structure And Data With A Different Name
soup:
I guess you can make use of Execute Sql Task for this and simply execute the following statements inside your task.
\nInstead of dropping and re-creating the table, simply truncate it: dropping the table means you would have to re-grant permissions if access is restricted to specific users.
\nWithout Dropping the table
\nTRUNCATE TABLE Test_myTable;\nGO\n\nINSERT INTO Test_myTable (Col1, Col2, Col3, .....)\nSELECT Col1, Col2, Col3, .....\nFROM myTable\nGO\n
\nDrop Table and Create
\nIf for some reason you have to drop table and re-create it again you could execute the following statements inside your execute sql task.
\n--Drop tables if exists\n\nIF OBJECT_ID('dbo.Test_myTable', 'U') IS NOT NULL\n DROP TABLE dbo.Test_myTable\nGO\n\n--Create and populate table\nSELECT Col1, Col2, Col3, .....\nINTO dbo.Test_myTable\nFROM myTable\nGO\n
\n
soup wrap:
I guess you can make use of Execute Sql Task for this and simply execute the following statements inside your task.
Instead of dropping and re-creating the table, simply truncate it: dropping the table means you would have to re-grant permissions if access is restricted to specific users.
Without Dropping the table
TRUNCATE TABLE Test_myTable;
GO
INSERT INTO Test_myTable (Col1, Col2, Col3, .....)
SELECT Col1, Col2, Col3, .....
FROM myTable
GO
Drop Table and Create
If for some reason you have to drop table and re-create it again you could execute the following statements inside your execute sql task.
--Drop tables if exists
IF OBJECT_ID('dbo.Test_myTable', 'U') IS NOT NULL
DROP TABLE dbo.Test_myTable
GO
--Create and populate table
SELECT Col1, Col2, Col3, .....
INTO dbo.Test_myTable
FROM myTable
GO
qid & accept id:
(24779447, 26926830)
query:
MySQL - Return group of last inserted ID's
soup:
Well, it turns out that MySQL is... painful to work with, however if anyone wants a solution here it is:
\nDeclare a variable for each insert and, inside your cursor loop, set it to last_insert_id().\nFor example:
\n declare last_insert_pk int;\n declare last_insert2_pk int;\n
\nThen, in the cursor, you set the last inserted pk(s) for that iteration:
\n set last_insert_pk = last_insert_id();\n -- ...some stuff...\n set last_insert2_pk = last_insert_id();\n
\nI had to use 8 different primary keys in a giant relation table, however it worked really well. There may be a better way, but this is understandable and repeatable.
\nGood luck!
\n
soup wrap:
Well, it turns out that MySQL is... painful to work with, however if anyone wants a solution here it is:
Declare a variable for each insert and, inside your cursor loop, set it to last_insert_id().
For example:
declare last_insert_pk int;
declare last_insert2_pk int;
Then, in the cursor, you set the last inserted pk(s) for that iteration:
set last_insert_pk = last_insert_id();
-- ...some stuff...
set last_insert2_pk = last_insert_id();
I had to use 8 different primary keys in a giant relation table, however it worked really well. There may be a better way, but this is understandable and repeatable.
Good luck!
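The same capture-it-immediately discipline applies in client code. With Python's sqlite3, lastrowid plays the role of last_insert_id() and must be read right after each INSERT, before the next one overwrites it (table t here is just an illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")

cur = conn.cursor()
cur.execute("INSERT INTO t (v) VALUES ('first')")
last_insert_pk = cur.lastrowid       # capture before the next insert
cur.execute("INSERT INTO t (v) VALUES ('second')")
last_insert2_pk = cur.lastrowid
print(last_insert_pk, last_insert2_pk)  # 1 2
```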
qid & accept id:
(24794466, 24794756)
query:
Database Schema Design: Tracking User Balance with concurrency
soup:
Relying on calculating an account balance every time you go to insert a new transaction is not a very good design - for one thing, as time goes by it will take longer and longer, as more and more rows appear in the transaction table.
\nA better idea is to store the current balance in another table - either a new table, or in the existing users table that you are already using as a foreign key reference.
\nIt could look like this:
\nCREATE TABLE users (\n user_id INT PRIMARY KEY,\n balance BIGINT NOT NULL DEFAULT 0 CHECK(balance>=0)\n);\n
\nThen, whenever you add a transaction, you update the balance like this:
\nUPDATE users SET balance=balance+$1 WHERE user_id=$2;\n
\nYou must do this inside a transaction, in which you also insert the transaction record.
\nConcurrency issues are taken care of automatically: if you attempt to update the same record twice from two different transactions, then the second one will be blocked until the first one commits or rolls back. The default transaction isolation level of 'Read Committed' ensures this - see the manual section on concurrency.
\nYou can issue the whole sequence from your application, or if you prefer you can add a trigger to the user_transaction table such that whenever a record is inserted into the user_transaction table, the balance is updated automatically.
\nThat way, the CHECK clause ensures that no transactions can be entered into the database that would cause the balance to go below 0.
\n
soup wrap:
Relying on calculating an account balance every time you go to insert a new transaction is not a very good design - for one thing, as time goes by it will take longer and longer, as more and more rows appear in the transaction table.
A better idea is to store the current balance in another table - either a new table, or in the existing users table that you are already using as a foreign key reference.
It could look like this:
CREATE TABLE users (
user_id INT PRIMARY KEY,
balance BIGINT NOT NULL DEFAULT 0 CHECK(balance>=0)
);
Then, whenever you add a transaction, you update the balance like this:
UPDATE users SET balance=balance+$1 WHERE user_id=$2;
You must do this inside a transaction, in which you also insert the transaction record.
Concurrency issues are taken care of automatically: if you attempt to update the same record twice from two different transactions, then the second one will be blocked until the first one commits or rolls back. The default transaction isolation level of 'Read Committed' ensures this - see the manual section on concurrency.
You can issue the whole sequence from your application, or if you prefer you can add a trigger to the user_transaction table such that whenever a record is inserted into the user_transaction table, the balance is updated automatically.
That way, the CHECK clause ensures that no transactions can be entered into the database that would cause the balance to go below 0.
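A minimal sketch of this design translated to Python with sqlite3 (the users/user_transaction names follow the answer; the amounts are invented): the transaction row and the balance update are committed atomically, and the CHECK constraint rejects an overdraft.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (
        user_id INTEGER PRIMARY KEY,
        balance INTEGER NOT NULL DEFAULT 0 CHECK (balance >= 0)
    );
    CREATE TABLE user_transaction (
        id INTEGER PRIMARY KEY,
        user_id INTEGER REFERENCES users,
        amount INTEGER NOT NULL
    );
""")
conn.execute("INSERT INTO users (user_id, balance) VALUES (1, 100)")
conn.commit()

def add_transaction(user_id, amount):
    # One transaction for both statements: "with conn" commits on success
    # and rolls back everything if the CHECK constraint fires.
    with conn:
        conn.execute(
            "INSERT INTO user_transaction (user_id, amount) VALUES (?, ?)",
            (user_id, amount))
        conn.execute(
            "UPDATE users SET balance = balance + ? WHERE user_id = ?",
            (amount, user_id))

add_transaction(1, -40)           # 100 -> 60
try:
    add_transaction(1, -100)      # would be -40: rejected atomically
    overdraft_allowed = True
except sqlite3.IntegrityError:
    overdraft_allowed = False

balance = conn.execute(
    "SELECT balance FROM users WHERE user_id = 1").fetchone()[0]
tx_count = conn.execute(
    "SELECT COUNT(*) FROM user_transaction").fetchone()[0]
```

Note that the rejected transfer leaves no orphan transaction row behind, which is exactly why the answer insists on doing both statements inside one transaction.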
qid & accept id:
(24795288, 24800471)
query:
INSERT interpolated rows into existing table
soup:
Possible to do. Have a sub query that gets the max reported time for each order id / stock id and join that against the orders table where the stock id is the same and the latest time is less that the order time. This gets you all the report times for that stock id that are greater than the latest time for that stock id and order id.
\nUse MIN to get the lowest reported time. Convert the 2 times to seconds, add them together and divide by 2, then convert back from seconds to a time.
\nSomething like this:-
\nSELECT orderid, stockid, 0, SEC_TO_TIME((TIME_TO_SEC(next_poss_order_report) + TIME_TO_SEC(last_order_report)) / 2)\nFROM\n(\n SELECT a.orderid, a.stockid, last_order_report, MIN(b.reported) next_poss_order_report\n FROM \n (\n SELECT orderid, stockid, MAX(reported) last_order_report\n FROM orders_table\n GROUP BY orderid, stockid\n ) a\n INNER JOIN orders_table b\n ON a.stockid = b.stockid\n AND a.last_order_report < b.reported\n GROUP BY a.orderid, a.stockid, a.last_order_report\n) sub0;\n
\nSQL fiddle here:-
\nhttp://www.sqlfiddle.com/#!2/cf129/17
\nPossible to simplify this a bit to:-
\nSELECT a.orderid, a.stockid, 0, SEC_TO_TIME((TIME_TO_SEC(MIN(b.reported)) + TIME_TO_SEC(last_order_report)) / 2)\nFROM \n(\n SELECT orderid, stockid, MAX(reported) last_order_report\n FROM orders_table\n GROUP BY orderid, stockid\n) a\nINNER JOIN orders_table b\nON a.stockid = b.stockid\nAND a.last_order_report < b.reported\nGROUP BY a.orderid, a.stockid, a.last_order_report;\n
\nThese queries might take a while, but are probably more efficient than running many queries from scripted code.
\n
soup wrap:
Possible to do. Have a subquery that gets the max reported time for each order id / stock id, and join that against the orders table where the stock id is the same and that latest time is less than the reported time. This gets you all the report times for that stock id that are greater than the latest time for that stock id and order id.
Use MIN to get the lowest reported time. Convert the 2 times to seconds, add them together and divide by 2, then convert back from seconds to a time.
Something like this:-
SELECT orderid, stockid, 0, SEC_TO_TIME((TIME_TO_SEC(next_poss_order_report) + TIME_TO_SEC(last_order_report)) / 2)
FROM
(
SELECT a.orderid, a.stockid, last_order_report, MIN(b.reported) next_poss_order_report
FROM
(
SELECT orderid, stockid, MAX(reported) last_order_report
FROM orders_table
GROUP BY orderid, stockid
) a
INNER JOIN orders_table b
ON a.stockid = b.stockid
AND a.last_order_report < b.reported
GROUP BY a.orderid, a.stockid, a.last_order_report
) sub0;
SQL fiddle here:-
http://www.sqlfiddle.com/#!2/cf129/17
Possible to simplify this a bit to:-
SELECT a.orderid, a.stockid, 0, SEC_TO_TIME((TIME_TO_SEC(MIN(b.reported)) + TIME_TO_SEC(last_order_report)) / 2)
FROM
(
SELECT orderid, stockid, MAX(reported) last_order_report
FROM orders_table
GROUP BY orderid, stockid
) a
INNER JOIN orders_table b
ON a.stockid = b.stockid
AND a.last_order_report < b.reported
GROUP BY a.orderid, a.stockid, a.last_order_report;
These queries might take a while, but are probably more efficient than running many queries from scripted code.
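The seconds-based midpoint step (TIME_TO_SEC both times, average, SEC_TO_TIME the result) can be sketched directly in Python; the helper names and sample times below are invented for illustration:

```python
# A plain-Python translation of the answer's midpoint step:
# TIME_TO_SEC both times, average, SEC_TO_TIME the result.
# (Times are plain HH:MM:SS strings; the sample values are invented.)

def time_to_sec(t: str) -> int:
    h, m, s = (int(part) for part in t.split(":"))
    return h * 3600 + m * 60 + s

def sec_to_time(sec: int) -> str:
    return f"{sec // 3600:02d}:{sec % 3600 // 60:02d}:{sec % 60:02d}"

def midpoint(last_order_report: str, next_poss_order_report: str) -> str:
    # Integer division mirrors whole-second SQL arithmetic.
    return sec_to_time((time_to_sec(last_order_report)
                        + time_to_sec(next_poss_order_report)) // 2)

mid = midpoint("10:00:00", "10:30:00")
```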
qid & accept id:
(24833153, 24833766)
query:
How to store messages with multiple recipients in PostgreSQL?
soup:
You need:
\n\na table for the users of your app, with the usual columns (unique id, name, etc.),
\na table for messages, with also a unique id, and a column to indicate which message it replies to; this will let you build threading
\na third table which constitutes the many to many relationship, with a foreign key on the user table and a foreign key on the message table,
\n
\nGetting all the recipients for a given message, or all the messages for a given recipient is just doing a couple of inner joins between all three tables and the proper where clause.
\nFor threading, you will need a recursive common table expression, which let you follow up the links between rows in the message table.
\nSomething like:
\nWITH RECURSIVE threads AS (\n SELECT id, parent_id, id AS root_id, body\n FROM messages\n WHERE parent_id IS NULL\n UNION ALL\n SELECT msg.id AS id , msg.parent_id AS parent_id, msgp.root_id AS root_id, msg.body AS body\n FROM messages AS msg\n INNER JOIN threads AS msgp\n ON (msg.parent_id = msgp.id)\n)\nSELECT *\nFROM threads\nWHERE root_id = :root;\n
\nWhere the column root_id contains the row id at the origin of the thread of the current row, will let you select a single thread whose root_id is set by the parameter :root.
\nWith multiple recipients, you need to do the inner joins on threads:
\nWITH ...\n)\nSELECT *\nFROM threads\nINNER JOIN threads_users tu\nON threads.id = tu.msg_id\nINNER JOIN users\nON users.id = tu.user_id\nWHERE root_id=:root\n
\n
soup wrap:
You need:
a table for the users of your app, with the usual columns (unique id, name, etc.),
a table for messages, with also a unique id, and a column to indicate which message it replies to; this will let you build threading
a third table which constitutes the many to many relationship, with a foreign key on the user table and a foreign key on the message table,
Getting all the recipients for a given message, or all the messages for a given recipient, is just a couple of inner joins between all three tables with the proper where clause.
For threading, you will need a recursive common table expression, which lets you follow the links between rows in the message table.
Something like:
WITH RECURSIVE threads AS (
SELECT id, parent_id, id AS root_id, body
FROM messages
WHERE parent_id IS NULL
UNION ALL
SELECT msg.id AS id , msg.parent_id AS parent_id, msgp.root_id AS root_id, msg.body AS body
FROM messages AS msg
INNER JOIN threads AS msgp
ON (msg.parent_id = msgp.id)
)
SELECT *
FROM threads
WHERE root_id = :root;
The root_id column contains the id of the row at the origin of each thread, which lets you select a single thread whose root is set by the parameter :root.
With multiple recipients, you need to do the inner joins on threads:
WITH ...
)
SELECT *
FROM threads
INNER JOIN threads_users tu
ON threads.id = tu.msg_id
INNER JOIN users
ON users.id = tu.user_id
WHERE root_id=:root
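For a quick check of the CTE's shape, here is the same recursive query run against SQLite from Python (SQLite also supports WITH RECURSIVE; the sample messages are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE messages (
        id INTEGER PRIMARY KEY,
        parent_id INTEGER REFERENCES messages,
        body TEXT
    );
    INSERT INTO messages VALUES
        (1, NULL, 'first thread root'),
        (2, 1, 'reply to 1'),
        (3, 2, 'reply to 2'),
        (4, NULL, 'second thread root'),
        (5, 4, 'reply to 4');
""")

# Same shape as the answer's CTE: roots seed the recursion and every
# reply inherits root_id from its parent, so one thread is one root_id.
rows = conn.execute("""
    WITH RECURSIVE threads AS (
        SELECT id, parent_id, id AS root_id, body
        FROM messages
        WHERE parent_id IS NULL
        UNION ALL
        SELECT msg.id, msg.parent_id, msgp.root_id, msg.body
        FROM messages AS msg
        INNER JOIN threads AS msgp ON msg.parent_id = msgp.id
    )
    SELECT id FROM threads WHERE root_id = ? ORDER BY id
""", (1,)).fetchall()

thread_ids = [r[0] for r in rows]
```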
qid & accept id:
(24848880, 24849369)
query:
oracle month to day
soup:
As one of the approaches, you can turn a month into a list of days(dates) that constitute it (ease filtering operation), and perform calculation as follows:
\n/* sample of data that you've provided */\nwith t1(mnth,val) as(\n select 1, 93 from dual union all\n select 2, 56 from dual union all\n select 3, 186 from dual union all\n select 4, 60 from dual\n), \n/*\n Generates current year dates \n From January 1st 2014 to December 31st 2014 \n */\ndates(dt) as(\n select trunc(sysdate, 'YEAR') - 1 + level\n from dual\n connect by extract(year from (trunc(sysdate, 'YEAR') - 1 + level)) <= \n extract(year from sysdate)\n)\n/* \n The query that performs calculations based on range of dates \n */\nselect sum(val / extract(day from last_day(dt))) as result\n from dates d\n join t1\n on (extract(month from d.dt) = t1.mnth)\n where dt between date '2014-01-17' and -- January 17th 2014 to \n date '2014-03-31' -- March 31st 2014\n
\nResult:
\n RESULT\n----------\n 287 \n
\n
soup wrap:
As one of the approaches, you can turn a month into the list of days (dates) that constitute it, which eases the filtering, and perform the calculation as follows:
/* sample of data that you've provided */
with t1(mnth,val) as(
select 1, 93 from dual union all
select 2, 56 from dual union all
select 3, 186 from dual union all
select 4, 60 from dual
),
/*
Generates current year dates
From January 1st 2014 to December 31st 2014
*/
dates(dt) as(
select trunc(sysdate, 'YEAR') - 1 + level
from dual
connect by extract(year from (trunc(sysdate, 'YEAR') - 1 + level)) <=
extract(year from sysdate)
)
/*
The query that performs calculations based on range of dates
*/
select sum(val / extract(day from last_day(dt))) as result
from dates d
join t1
on (extract(month from d.dt) = t1.mnth)
where dt between date '2014-01-17' and -- January 17th 2014 to
date '2014-03-31' -- March 31st 2014
Result:
RESULT
----------
287
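The per-day proration the query performs (each month's value divided by that month's day count, summed over the range) can be reproduced in a few lines of Python; with the sample data above it gives the same 287:

```python
from calendar import monthrange
from datetime import date, timedelta

# The sample months from the answer: month number -> monthly value (2014).
monthly = {1: 93, 2: 56, 3: 186, 4: 60}

def prorated_sum(start: date, end: date) -> float:
    """Expand the range into individual days and add val / days-in-month
    for each one (monthrange() plays the part of Oracle's last_day())."""
    total = 0.0
    d = start
    while d <= end:
        total += monthly[d.month] / monthrange(d.year, d.month)[1]
        d += timedelta(days=1)
    return total

result = prorated_sum(date(2014, 1, 17), date(2014, 3, 31))
```

(15 remaining January days at 93/31, all of February at 56/28, all of March at 186/31: 45 + 56 + 186 = 287.)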
qid & accept id:
(24855053, 24856282)
query:
Concatenating row values using Inner Join
soup:
You can do what you want by pre-aggregating the table before the join. If there are only two values and you don't care about the order, then this will work:
\nDECLARE @DocHoldReasons VARCHAR(8000);\nSET @DocHoldReasons = 'DocType Hold';\n\nUPDATE dbo.EpnPackages \n SET Error = 1,\n Msg = (COALESCE(@DocHoldReasons + ': ', '') + minv +\n (case when minv <> maxv then ': ' + maxv else '' end)\n )\n FROM EpnPackages p INNER JOIN\n (select cv.CountyId, min(cv.value) as minv, max(cv.value) as maxv\n from EpnCountyValues cv\n where cv.ValueName = 'DocHoldReason'\n ) cv\n ON cv.CountyId = p.CountyId\n WHERE p.Status = 1000 AND p.Error = 0;\n
\nEDIT:
\nFor more than two values, you have to do string concatenation. That is "unpleasant" in SQL Server. Here is the approach:
\nDECLARE @DocHoldReasons VARCHAR(8000);\nSET @DocHoldReasons = 'DocType Hold';\n\nUPDATE dbo.EpnPackages \n SET Error = 1,\n Msg = (COALESCE(@DocHoldReasons + ': ', '') + \n stuff((select ': ' + cv.value\n from EpnCountyValues cv\n where cv.ValueName = 'DocHoldReason' and\n cv.CountyId = p.CountyId\n for xml path ('')\n ), 1, 2, '')\n )\n WHERE p.Status = 1000 AND p.Error = 0;\n
\nThis version does it using a correlated subquery rather than a join with an aggregation.
\nEDIT II:
\nYou can fix this with an additional coalesce:
\nDECLARE @DocHoldReasons VARCHAR(8000);\nSET @DocHoldReasons = 'DocType Hold';\n\nUPDATE dbo.EpnPackages \n SET Error = 1,\n Msg = (COALESCE(@DocHoldReasons + ': ', '') + \n COALESCE(stuff((select ': ' + cv.value\n from EpnCountyValues cv\n where cv.ValueName = 'DocHoldReason' and\n cv.CountyId = p.CountyId\n for xml path ('')\n ), 1, 2, ''), '')\n )\n WHERE p.Status = 1000 AND p.Error = 0;\n
\n
soup wrap:
You can do what you want by pre-aggregating the table before the join. If there are only two values and you don't care about the order, then this will work:
DECLARE @DocHoldReasons VARCHAR(8000);
SET @DocHoldReasons = 'DocType Hold';
UPDATE dbo.EpnPackages
SET Error = 1,
Msg = (COALESCE(@DocHoldReasons + ': ', '') + minv +
(case when minv <> maxv then ': ' + maxv else '' end)
)
FROM EpnPackages p INNER JOIN
(select cv.CountyId, min(cv.value) as minv, max(cv.value) as maxv
from EpnCountyValues cv
where cv.ValueName = 'DocHoldReason'
group by cv.CountyId
) cv
ON cv.CountyId = p.CountyId
WHERE p.Status = 1000 AND p.Error = 0;
EDIT:
For more than two values, you have to do string concatenation. That is "unpleasant" in SQL Server. Here is the approach:
DECLARE @DocHoldReasons VARCHAR(8000);
SET @DocHoldReasons = 'DocType Hold';
UPDATE p
SET Error = 1,
Msg = (COALESCE(@DocHoldReasons + ': ', '') +
stuff((select ': ' + cv.value
from EpnCountyValues cv
where cv.ValueName = 'DocHoldReason' and
cv.CountyId = p.CountyId
for xml path ('')
), 1, 2, '')
)
FROM dbo.EpnPackages p
WHERE p.Status = 1000 AND p.Error = 0;
This version does it using a correlated subquery rather than a join with an aggregation.
EDIT II:
You can fix this with an additional coalesce:
DECLARE @DocHoldReasons VARCHAR(8000);
SET @DocHoldReasons = 'DocType Hold';
UPDATE p
SET Error = 1,
Msg = (COALESCE(@DocHoldReasons + ': ', '') +
COALESCE(stuff((select ': ' + cv.value
from EpnCountyValues cv
where cv.ValueName = 'DocHoldReason' and
cv.CountyId = p.CountyId
for xml path ('')
), 1, 2, ''), '')
)
FROM dbo.EpnPackages p
WHERE p.Status = 1000 AND p.Error = 0;
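For comparison, other engines expose string aggregation directly, which avoids the FOR XML PATH workaround entirely. A small Python/sqlite3 sketch with invented sample data, using SQLite's group_concat:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE EpnCountyValues (CountyId INTEGER, ValueName TEXT, value TEXT);
    INSERT INTO EpnCountyValues VALUES
        (1, 'DocHoldReason', 'Missing signature'),
        (1, 'DocHoldReason', 'Bad date'),
        (2, 'DocHoldReason', 'Illegible');
""")

# group_concat() plays the role of the FOR XML PATH trick:
# one concatenated reason string per CountyId.
rows = conn.execute("""
    SELECT CountyId, group_concat(value, ': ') AS reasons
    FROM EpnCountyValues
    WHERE ValueName = 'DocHoldReason'
    GROUP BY CountyId
    ORDER BY CountyId
""").fetchall()
```

(SQLite does not guarantee the concatenation order within a group, which matches the "don't care about the order" caveat in the answer.)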
qid & accept id:
(24910861, 24914798)
query:
Restrict foreign key relationship to rows of related subtypes
soup:
Simplify building on MATCH SIMPLE behavior of fk constraints
\nIf at least one column of multicolumn foreign constraint with default MATCH SIMPLE behaviour is NULL, the constraint is not enforced. You can build on that to largely simplify your design.
\nCREATE SCHEMA test;\n\nCREATE TABLE test.status(\n status_id integer PRIMARY KEY\n ,sub bool NOT NULL DEFAULT FALSE -- TRUE .. *can* be sub-status\n ,UNIQUE (sub, status_id)\n);\n\nCREATE TABLE test.entity(\n entity_id integer PRIMARY KEY\n ,status_id integer REFERENCES test.status -- can reference all statuses\n ,sub bool -- see examples below\n ,additional_col1 text -- should be NULL for main entities\n ,additional_col2 text -- should be NULL for main entities\n ,FOREIGN KEY (sub, status_id) REFERENCES test.status(sub, status_id)\n MATCH SIMPLE ON UPDATE CASCADE -- optionally enforce sub-status\n);\n
\nIt is very cheap to store some additional NULL columns (for main entities):
\n\nBTW, per documentation:
\n\nIf the refcolumn list is omitted, the primary key of the reftable is used.
\n
\nDemo-data:
\nINSERT INTO test.status VALUES\n (1, TRUE)\n, (2, TRUE)\n, (3, FALSE); -- not valid for sub-entities\n\nINSERT INTO test.entity(entity_id, status_id, sub) VALUES\n (11, 1, TRUE) -- sub-entity (can be main, UPDATES to status.sub cascaded)\n, (13, 3, FALSE) -- entity (cannot be sub, UPDATES to status.sub cascaded)\n, (14, 2, NULL) -- entity (can be sub, UPDATES to status.sub NOT cascaded)\n, (15, 3, NULL) -- entity (cannot be sub, UPDATES to status.sub NOT cascaded)\n
\nSQL Fiddle (including your tests).
\nAlternative with single FK
\nAnother option would be to enter all combinations of (status_id, sub) into the status table (there can only be 2 per status_id) and only have a single fk constraint:
\nCREATE TABLE test.status(\n status_id integer\n ,sub bool DEFAULT FALSE\n ,PRIMARY KEY (status_id, sub)\n);\n\nCREATE TABLE test.entity(\n entity_id integer PRIMARY KEY\n ,status_id integer NOT NULL -- cannot be NULL in this case\n ,sub bool NOT NULL -- cannot be NULL in this case\n ,additional_col1 text\n ,additional_col2 text\n ,FOREIGN KEY (status_id, sub) REFERENCES test.status\n MATCH SIMPLE ON UPDATE CASCADE -- optionally enforce sub-status\n);\n\nINSERT INTO test.status VALUES\n (1, TRUE) -- can be sub ...\n (1, FALSE) -- ... and main\n, (2, TRUE)\n, (2, FALSE)\n, (3, FALSE); -- only main\n
\nEtc.
\nRelated answers:
\n\n- MATCH FULL vs MATCH SIMPLE
\n- Two-column foreign key constraint only when third column is NOT NULL
\n- Uniqueness validation in database when validation has a condition on another table
\n
\nKeep all tables
\nIf you need all four tables for some reason not in the question consider this detailed solution to a very similar question on dba.SE:
\n\nInheritance
\n... might be another option for what you describe. If you can live with some major limitations. Related answer:
\n\n
soup wrap:
Simplify by building on the MATCH SIMPLE behavior of FK constraints
If at least one column of a multicolumn foreign key constraint with the default MATCH SIMPLE behaviour is NULL, the constraint is not enforced. You can build on that to largely simplify your design.
CREATE SCHEMA test;
CREATE TABLE test.status(
status_id integer PRIMARY KEY
,sub bool NOT NULL DEFAULT FALSE -- TRUE .. *can* be sub-status
,UNIQUE (sub, status_id)
);
CREATE TABLE test.entity(
entity_id integer PRIMARY KEY
,status_id integer REFERENCES test.status -- can reference all statuses
,sub bool -- see examples below
,additional_col1 text -- should be NULL for main entities
,additional_col2 text -- should be NULL for main entities
,FOREIGN KEY (sub, status_id) REFERENCES test.status(sub, status_id)
MATCH SIMPLE ON UPDATE CASCADE -- optionally enforce sub-status
);
It is very cheap to store some additional NULL columns (for main entities).
BTW, per documentation:
If the refcolumn list is omitted, the primary key of the reftable is used.
Demo-data:
INSERT INTO test.status VALUES
(1, TRUE)
, (2, TRUE)
, (3, FALSE); -- not valid for sub-entities
INSERT INTO test.entity(entity_id, status_id, sub) VALUES
(11, 1, TRUE) -- sub-entity (can be main, UPDATES to status.sub cascaded)
, (13, 3, FALSE) -- entity (cannot be sub, UPDATES to status.sub cascaded)
, (14, 2, NULL) -- entity (can be sub, UPDATES to status.sub NOT cascaded)
, (15, 3, NULL) -- entity (cannot be sub, UPDATES to status.sub NOT cascaded)
SQL Fiddle (including your tests).
Alternative with single FK
Another option would be to enter all combinations of (status_id, sub) into the status table (there can only be 2 per status_id) and only have a single fk constraint:
CREATE TABLE test.status(
status_id integer
,sub bool DEFAULT FALSE
,PRIMARY KEY (status_id, sub)
);
CREATE TABLE test.entity(
entity_id integer PRIMARY KEY
,status_id integer NOT NULL -- cannot be NULL in this case
,sub bool NOT NULL -- cannot be NULL in this case
,additional_col1 text
,additional_col2 text
,FOREIGN KEY (status_id, sub) REFERENCES test.status
MATCH SIMPLE ON UPDATE CASCADE -- optionally enforce sub-status
);
INSERT INTO test.status VALUES
(1, TRUE) -- can be sub ...
, (1, FALSE) -- ... and main
, (2, TRUE)
, (2, FALSE)
, (3, FALSE); -- only main
Etc.
Related answers:
- MATCH FULL vs MATCH SIMPLE
- Two-column foreign key constraint only when third column is NOT NULL
- Uniqueness validation in database when validation has a condition on another table
Keep all tables
If you need all four tables for some reason not in the question consider this detailed solution to a very similar question on dba.SE:
Inheritance
... might be another option for what you describe, if you can live with some major limitations. Related answer:
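SQLite happens to share this MATCH SIMPLE behaviour (a NULL in any column of a composite FK disables that check), so the first design can be tried out quickly from Python. A sketch with the demo data above, booleans stored as 0/1:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite leaves FKs off by default
conn.executescript("""
    CREATE TABLE status (
        status_id INTEGER PRIMARY KEY,
        sub BOOLEAN NOT NULL DEFAULT 0,
        UNIQUE (sub, status_id)
    );
    INSERT INTO status VALUES (1, 1), (2, 1), (3, 0);

    CREATE TABLE entity (
        entity_id INTEGER PRIMARY KEY,
        status_id INTEGER REFERENCES status,
        sub BOOLEAN,
        FOREIGN KEY (sub, status_id) REFERENCES status (sub, status_id)
    );
""")

# NULL in either column of the composite FK disables that constraint,
# so a plain entity may reference any status:
conn.execute("INSERT INTO entity VALUES (15, 3, NULL)")

# With sub = 1 the pair (1, 3) must exist in status; it does not:
try:
    conn.execute("INSERT INTO entity VALUES (16, 3, 1)")
    enforced = False
except sqlite3.IntegrityError:
    enforced = True

entity_count = conn.execute("SELECT COUNT(*) FROM entity").fetchone()[0]
```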
qid & accept id:
(24920949, 24921180)
query:
Remove text of a field after last repeating character
soup:
Test Data
\nDECLARE @TABLE TABLE (partnum VARCHAR(100))\nINSERT INTO @TABLE VALUES \n('H24897-D-001'),\n('BHF44-82-V-1325'),\n('BKNG5222'),\n('YAKJD-78AB')\n
\nQuery
\nSELECT PartNum\n ,REVERSE(\n SUBSTRING(REVERSE(Partnum), \n CHARINDEX('-',REVERSE(Partnum)) \n , LEN(Partnum) - CHARINDEX('-',REVERSE(Partnum)) + 1)\n ) AS Result\nFROM @TABLE\n
\nOUTPUT
\n╔═════════════════╦═════════════╗\n║ PartNum ║ Result ║\n╠═════════════════╬═════════════╣\n║ H24897-D-001 ║ H24897-D- ║\n║ BHF44-82-V-1325 ║ BHF44-82-V- ║\n║ BKNG5222 ║ BKNG5222 ║\n║ YAKJD-78AB ║ YAKJD- ║\n╚═════════════════╩═════════════╝\n
\n
soup wrap:
Test Data
DECLARE @TABLE TABLE (partnum VARCHAR(100))
INSERT INTO @TABLE VALUES
('H24897-D-001'),
('BHF44-82-V-1325'),
('BKNG5222'),
('YAKJD-78AB')
Query
SELECT PartNum
,REVERSE(
SUBSTRING(REVERSE(Partnum),
CHARINDEX('-',REVERSE(Partnum))
, LEN(Partnum) - CHARINDEX('-',REVERSE(Partnum)) + 1)
) AS Result
FROM @TABLE
OUTPUT
╔═════════════════╦═════════════╗
║ PartNum ║ Result ║
╠═════════════════╬═════════════╣
║ H24897-D-001 ║ H24897-D- ║
║ BHF44-82-V-1325 ║ BHF44-82-V- ║
║ BKNG5222 ║ BKNG5222 ║
║ YAKJD-78AB ║ YAKJD- ║
╚═════════════════╩═════════════╝
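The REVERSE/CHARINDEX/SUBSTRING pipeline boils down to "keep everything up to and including the last dash". A Python sketch over the same test data, for reference (the function name is invented):

```python
def keep_through_last_dash(partnum: str) -> str:
    """What the REVERSE/CHARINDEX/SUBSTRING expression computes: keep
    everything up to and including the last '-', or the whole string
    when there is no '-' at all."""
    pos = partnum.rfind('-')
    return partnum if pos == -1 else partnum[:pos + 1]

results = [keep_through_last_dash(p)
           for p in ['H24897-D-001', 'BHF44-82-V-1325', 'BKNG5222', 'YAKJD-78AB']]
```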
qid & accept id:
(24921796, 24921834)
query:
SUM subquery for total amount for each line
soup:
SELECT\n O.FileNumber,\n O.CloseDate,\n SUM(CL.Amount) as Total\nFROM dbo.Orders O\n LEFT JOIN dbo.Checks C\n ON O.OrdersID = C.OrdersID\n LEFT JOIN dbo.CheckLine CL\n ON C.ChecksID = CL.ChecksID\n GROUP BY O.FileNumber, O.CloseDate\n
\nWhen you calculate Total in a subquery, that value will be treated as constant by SQL Server that will repeat every row.
\nIt is very common to confuse GROUP BY with DISTINCT (please look at here and here) since they return the same values if no aggregation function is in the SELECT clause. In your example:
\nSELECT DISTINCT FileNumber FROM ORDERS \n
\nwill return the same of
\nSELECT FileNumber FROM ORDERS GROUP BY FileNumber\n
\nUse GROUP BY if you are wanting to aggregate information (like your field TOTAL).
\n
soup wrap:
SELECT
O.FileNumber,
O.CloseDate,
SUM(CL.Amount) as Total
FROM dbo.Orders O
LEFT JOIN dbo.Checks C
ON O.OrdersID = C.OrdersID
LEFT JOIN dbo.CheckLine CL
ON C.ChecksID = CL.ChecksID
GROUP BY O.FileNumber, O.CloseDate
When you calculate Total in a subquery, SQL Server treats that value as a constant and repeats it on every row.
It is very common to confuse GROUP BY with DISTINCT, since they return the same values when no aggregate function is in the SELECT clause. In your example:
SELECT DISTINCT FileNumber FROM ORDERS
will return the same result as
SELECT FileNumber FROM ORDERS GROUP BY FileNumber
Use GROUP BY when you want to aggregate information (like your TOTAL field).
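A runnable miniature of the accepted query, using Python with sqlite3 and invented sample rows, showing one Total per FileNumber/CloseDate group:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Orders (OrdersID INTEGER PRIMARY KEY, FileNumber TEXT, CloseDate TEXT);
    CREATE TABLE Checks (ChecksID INTEGER PRIMARY KEY, OrdersID INTEGER);
    CREATE TABLE CheckLine (ChecksID INTEGER, Amount REAL);
    INSERT INTO Orders VALUES (1, 'F-100', '2014-07-01'), (2, 'F-200', '2014-07-02');
    INSERT INTO Checks VALUES (10, 1), (11, 1), (12, 2);
    INSERT INTO CheckLine VALUES (10, 25.0), (10, 75.0), (11, 50.0), (12, 10.0);
""")

# GROUP BY collapses the joined rows back to one row per order,
# so SUM sees every check line belonging to that order.
rows = conn.execute("""
    SELECT O.FileNumber, SUM(CL.Amount) AS Total
    FROM Orders O
    LEFT JOIN Checks C ON O.OrdersID = C.OrdersID
    LEFT JOIN CheckLine CL ON C.ChecksID = CL.ChecksID
    GROUP BY O.FileNumber, O.CloseDate
    ORDER BY O.FileNumber
""").fetchall()
```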
qid & accept id:
(24939702, 24940006)
query:
Increase Date datatype by Number
soup:
You can try use the dateadd function here. This function takes a specific value, and adds it to a specified date. You can add days, years, minutes, hours, and so on. In your case, you want to add minutes, and since you are adding to the already existing scheddate, you will use that as a parameter.
\nHere's what the syntax may look like:
\nUPDATE scpomgr.schedrcpts sr\nSET sr.scheddate = dateadd(\n minute, \n (SELECT n.transleadtime FROM scpomgr.network n WHERE n.source = sr.loc),\n (SELECT sr.scheddate)\n );\n
\nThis will add minutes (specified by the first parameter), to the sr.scheddate (specified by the third parameter). The minutes that will be added are the n.translead time (specified by the second parameter).
\nRight now, this makes the assumption that selecting the sr.scheddate and select n.transleadtime that you have will only return 1 value. If they return more, you may have to adjust your where statement or limit the result set.
\nI also took out the NVL function, but if you want to protect against null values I would put them in the second and/or third parameters. Definitely in the second, but if your scheddate column doesn't accept null values, then you won't need it.
\nUPDATE scpomgr.schedrcpts sr\nSET sr.scheddate = dateadd(\n minute, \n NVL((SELECT n.transleadtime FROM scpomgr.network n WHERE n.source = sr.loc), 0),\n (SELECT sr.scheddate)\n );\n
\nI can't test this at the moment, so it may take some tweaking, but start there and let me know how we can improve it.
\nEDIT
\nIf you're looking for the highest transleadtime, I do think the MAX function would be the simplest way. Try adjusting the subquery in the second parameter to:
\nSELECT MAX(n.transleadtime) FROM scpomgr.network n WHERE n.source = sr.loc\n
\n
soup wrap:
You can try using the DATEADD function here. This function takes a specific value and adds it to a specified date; you can add days, years, minutes, hours, and so on. In your case, you want to add minutes, and since you are adding to the already existing scheddate, you will use that as a parameter.
Here's what the syntax may look like:
UPDATE scpomgr.schedrcpts sr
SET sr.scheddate = dateadd(
minute,
(SELECT n.transleadtime FROM scpomgr.network n WHERE n.source = sr.loc),
(SELECT sr.scheddate)
);
This will add minutes (specified by the first parameter) to sr.scheddate (specified by the third parameter). The minutes that will be added come from n.transleadtime (specified by the second parameter).
Right now, this assumes that the subqueries selecting sr.scheddate and n.transleadtime each return only one value. If they return more, you may have to adjust your WHERE clause or limit the result set.
I also took out the NVL function, but if you want to protect against null values I would put them in the second and/or third parameters. Definitely in the second, but if your scheddate column doesn't accept null values, then you won't need it.
UPDATE scpomgr.schedrcpts sr
SET sr.scheddate = dateadd(
minute,
NVL((SELECT n.transleadtime FROM scpomgr.network n WHERE n.source = sr.loc), 0),
(SELECT sr.scheddate)
);
I can't test this at the moment, so it may take some tweaking, but start there and let me know how we can improve it.
EDIT
If you're looking for the highest transleadtime, I do think the MAX function would be the simplest way. Try adjusting the subquery in the second parameter to:
SELECT MAX(n.transleadtime) FROM scpomgr.network n WHERE n.source = sr.loc
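The DATEADD-with-NVL combination reduces to simple timedelta arithmetic. A small Python sketch of that logic, with an invented function name and sample values:

```python
from datetime import datetime, timedelta

def add_lead_time(scheddate, transleadtime_minutes):
    """Counterpart of DATEADD(minute, NVL(transleadtime, 0), scheddate):
    a missing lead time adds nothing instead of producing NULL."""
    minutes = 0 if transleadtime_minutes is None else transleadtime_minutes
    return scheddate + timedelta(minutes=minutes)

shifted = add_lead_time(datetime(2014, 7, 24, 8, 0), 90)
unchanged = add_lead_time(datetime(2014, 7, 24, 8, 0), None)
```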
qid & accept id:
(24970105, 24970139)
query:
How do I find the shortest bus route when there is more than 1 switch?
soup:
I have posted such a thing a little while ago, here:\nGraph problems: connect by NOCYCLE prior replacement in SQL server?
\nYou'll find further going tips here, where i cross-posted the question:
\nhttp://social.msdn.microsoft.com/Forums/sqlserver/en-US/32069da7-4820-490a-a8b7-09900ea1de69/is-there-a-nocycle-prior-replacement-in-sql-server?forum=transactsql
\n
\nCREATE TABLE [dbo].[T_Hops](\n [UID] [uniqueidentifier] NULL,\n [From] [nvarchar](1000) NULL,\n [To] [nvarchar](1000) NULL,\n [Distance] [decimal](18, 5) NULL\n) ON [PRIMARY]\n\nGO\n\n\n\n\n INSERT INTO [dbo].[T_Hops] ([UID] ,[From] ,[To] ,[Distance]) VALUES (newid() ,'A' ,'E' ,10.00000 ); \n INSERT INTO [dbo].[T_Hops] ([UID] ,[From] ,[To] ,[Distance]) VALUES (newid() ,'E' ,'D' ,20.00000 ); \n INSERT INTO [dbo].[T_Hops] ([UID] ,[From] ,[To] ,[Distance]) VALUES (newid() ,'A' ,'B' ,5.00000 ); \n INSERT INTO [dbo].[T_Hops] ([UID] ,[From] ,[To] ,[Distance]) VALUES (newid() ,'B' ,'C' ,10.00000 ); \n INSERT INTO [dbo].[T_Hops] ([UID] ,[From] ,[To] ,[Distance]) VALUES (newid() ,'C' ,'D' ,5.00000 ); \n INSERT INTO [dbo].[T_Hops] ([UID] ,[From] ,[To] ,[Distance]) VALUES (newid() ,'A' ,'F' ,2.00000 ); \n INSERT INTO [dbo].[T_Hops] ([UID] ,[From] ,[To] ,[Distance]) VALUES (newid() ,'F' ,'G' ,6.00000 ); \n INSERT INTO [dbo].[T_Hops] ([UID] ,[From] ,[To] ,[Distance]) VALUES (newid() ,'G' ,'H' ,3.00000 ); \n INSERT INTO [dbo].[T_Hops] ([UID] ,[From] ,[To] ,[Distance]) VALUES (newid() ,'H' ,'D' ,1.00000 ); \n
\nNow I can query the best connection from point x to point y like this:
\nWITH AllRoutes \n(\n [UID]\n ,[FROM]\n ,[To]\n ,[Distance]\n ,[Path]\n ,[Hops]\n)\nAS\n(\n SELECT \n [UID]\n ,[FROM]\n ,[To]\n ,[Distance]\n ,CAST(([dbo].[T_Hops].[FROM] + [dbo].[T_Hops].[To]) AS varchar(MAX)) AS [Path]\n ,1 AS [Hops]\n FROM [dbo].[T_Hops]\n WHERE [FROM] = 'A'\n\n UNION ALL\n\n\n SELECT \n [dbo].[T_Hops].[UID]\n --,[dbo].[T_Hops].[FROM]\n ,Parent.[FROM]\n ,[dbo].[T_Hops].[To]\n ,CAST((Parent.[Distance] + [dbo].[T_Hops].[Distance]) AS [decimal](18, 5)) AS distance\n ,CAST((Parent.[Path] + '/' + [dbo].[T_Hops].[FROM] + [dbo].[T_Hops].[To]) AS varchar(MAX)) AS [Path]\n ,(Parent.[Hops] + 1) AS [Hops]\n FROM [dbo].[T_Hops]\nINNER JOIN AllRoutes AS Parent \n ON Parent.[To] = [dbo].[T_Hops].[FROM] \n\n)\n\nSELECT TOP 100 PERCENT * FROM AllRoutes\n\n\n/*\nWHERE [FROM] = 'A' \nAND [To] = 'D'\nAND CHARINDEX('F', [Path]) != 0 -- via F\nORDER BY Hops, Distance ASC\n*/\n\nGO\n
\n
soup wrap:
I posted such a thing a little while ago, here:
Graph problems: connect by NOCYCLE prior replacement in SQL server?
You'll find further tips here, where I cross-posted the question:
http://social.msdn.microsoft.com/Forums/sqlserver/en-US/32069da7-4820-490a-a8b7-09900ea1de69/is-there-a-nocycle-prior-replacement-in-sql-server?forum=transactsql

CREATE TABLE [dbo].[T_Hops](
[UID] [uniqueidentifier] NULL,
[From] [nvarchar](1000) NULL,
[To] [nvarchar](1000) NULL,
[Distance] [decimal](18, 5) NULL
) ON [PRIMARY]
GO
INSERT INTO [dbo].[T_Hops] ([UID] ,[From] ,[To] ,[Distance]) VALUES (newid() ,'A' ,'E' ,10.00000 );
INSERT INTO [dbo].[T_Hops] ([UID] ,[From] ,[To] ,[Distance]) VALUES (newid() ,'E' ,'D' ,20.00000 );
INSERT INTO [dbo].[T_Hops] ([UID] ,[From] ,[To] ,[Distance]) VALUES (newid() ,'A' ,'B' ,5.00000 );
INSERT INTO [dbo].[T_Hops] ([UID] ,[From] ,[To] ,[Distance]) VALUES (newid() ,'B' ,'C' ,10.00000 );
INSERT INTO [dbo].[T_Hops] ([UID] ,[From] ,[To] ,[Distance]) VALUES (newid() ,'C' ,'D' ,5.00000 );
INSERT INTO [dbo].[T_Hops] ([UID] ,[From] ,[To] ,[Distance]) VALUES (newid() ,'A' ,'F' ,2.00000 );
INSERT INTO [dbo].[T_Hops] ([UID] ,[From] ,[To] ,[Distance]) VALUES (newid() ,'F' ,'G' ,6.00000 );
INSERT INTO [dbo].[T_Hops] ([UID] ,[From] ,[To] ,[Distance]) VALUES (newid() ,'G' ,'H' ,3.00000 );
INSERT INTO [dbo].[T_Hops] ([UID] ,[From] ,[To] ,[Distance]) VALUES (newid() ,'H' ,'D' ,1.00000 );
Now I can query the best connection from point x to point y like this:
WITH AllRoutes
(
[UID]
,[FROM]
,[To]
,[Distance]
,[Path]
,[Hops]
)
AS
(
SELECT
[UID]
,[FROM]
,[To]
,[Distance]
,CAST(([dbo].[T_Hops].[FROM] + [dbo].[T_Hops].[To]) AS varchar(MAX)) AS [Path]
,1 AS [Hops]
FROM [dbo].[T_Hops]
WHERE [FROM] = 'A'
UNION ALL
SELECT
[dbo].[T_Hops].[UID]
--,[dbo].[T_Hops].[FROM]
,Parent.[FROM]
,[dbo].[T_Hops].[To]
,CAST((Parent.[Distance] + [dbo].[T_Hops].[Distance]) AS [decimal](18, 5)) AS distance
,CAST((Parent.[Path] + '/' + [dbo].[T_Hops].[FROM] + [dbo].[T_Hops].[To]) AS varchar(MAX)) AS [Path]
,(Parent.[Hops] + 1) AS [Hops]
FROM [dbo].[T_Hops]
INNER JOIN AllRoutes AS Parent
ON Parent.[To] = [dbo].[T_Hops].[FROM]
)
SELECT TOP 100 PERCENT * FROM AllRoutes
/*
WHERE [FROM] = 'A'
AND [To] = 'D'
AND CHARINDEX('F', [Path]) != 0 -- via F
ORDER BY Hops, Distance ASC
*/
GO
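The same grow-all-paths idea works in any engine with recursive CTEs. Below is a compact Python/sqlite3 version of the route search over the answer's sample graph; the cheapest A-to-D path comes out at distance 12:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE hops (src TEXT, dst TEXT, distance REAL);
    INSERT INTO hops VALUES
        ('A','E',10), ('E','D',20),
        ('A','B',5), ('B','C',10), ('C','D',5),
        ('A','F',2), ('F','G',6), ('G','H',3), ('H','D',1);
""")

# Grow every path starting at 'A' (this sample graph is acyclic, so the
# recursion terminates), then keep the cheapest one ending at 'D'.
best = conn.execute("""
    WITH RECURSIVE routes(dst, distance, path) AS (
        SELECT dst, distance, 'A/' || dst FROM hops WHERE src = 'A'
        UNION ALL
        SELECT h.dst, r.distance + h.distance, r.path || '/' || h.dst
        FROM hops h
        INNER JOIN routes r ON r.dst = h.src
    )
    SELECT path, distance FROM routes
    WHERE dst = 'D'
    ORDER BY distance
    LIMIT 1
""").fetchone()
```

On a graph with cycles you would also need a cycle guard, e.g. rejecting rows whose destination already appears in path, just as the original NOCYCLE question implies.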
qid & accept id:
(25032106, 25032320)
query:
selection based on certain condition
soup:
SELECT col1,\n col2,\n col3\nFROM (SELECT col1,\n col2,\n col3,\n sum(col2) OVER (PARTITION BY col1) sum_col2\n FROM tab1)\nWHERE ( ( sum_col2 <> 0\n AND col2 <> 0)\n OR sum_col2 = 0)\n
\nIf col2 can be negative and the requirement is that the sum of col2 has "non-zero" data then the above is OK, however, if it is the requirement that any col2 value has "non-zero" data then it should be changed to:
\nSELECT col1,\n col2,\n col3\nFROM (SELECT col1,\n col2,\n col3,\n sum(abs(col2)) OVER (PARTITION BY col1) sum_col2\n FROM tab1)\nWHERE ( ( sum_col2 <> 0\n AND col2 <> 0)\n OR sum_col2 = 0)\n
\n
soup wrap:
SELECT col1,
col2,
col3
FROM (SELECT col1,
col2,
col3,
sum(col2) OVER (PARTITION BY col1) sum_col2
FROM tab1)
WHERE ( ( sum_col2 <> 0
AND col2 <> 0)
OR sum_col2 = 0)
If col2 can be negative and the requirement is that the sum of col2 has "non-zero" data, then the above is OK. However, if the requirement is that any col2 value has "non-zero" data, then it should be changed to:
SELECT col1,
col2,
col3
FROM (SELECT col1,
col2,
col3,
sum(abs(col2)) OVER (PARTITION BY col1) sum_col2
FROM tab1)
WHERE ( ( sum_col2 <> 0
AND col2 <> 0)
OR sum_col2 = 0)
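The window-sum filter above runs unchanged on SQLite (3.25+ for window functions), so it can be sanity-checked with a small script; the table contents here are made up for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tab1 (col1 TEXT, col2 INTEGER, col3 INTEGER)")
# Group 'a' has non-zero data; group 'b' sums to zero.
con.executemany("INSERT INTO tab1 VALUES (?, ?, ?)",
                [("a", 0, 1), ("a", 5, 2), ("b", 0, 3)])

# Same filter as above: within groups that have non-zero data,
# keep only the non-zero rows; keep all-zero groups whole.
rows = con.execute("""
    SELECT col1, col2, col3
    FROM (SELECT col1, col2, col3,
                 SUM(col2) OVER (PARTITION BY col1) AS sum_col2
          FROM tab1)
    WHERE (sum_col2 <> 0 AND col2 <> 0) OR sum_col2 = 0
    ORDER BY col1, col3
""").fetchall()
print(rows)  # [('a', 5, 2), ('b', 0, 3)]
```

The zero row in group `a` is dropped, while group `b` (all zeros) survives intact.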
qid & accept id:
(25036420, 25036494)
query:
Shift manipulation in SQL to get counts
soup:
I think you can get what you want using conditional aggregation:
\nSELECT EID,\n sum(case when shift = 'd' then 1 else 0 end) as dayshifts,\n sum(case when shift = 'n' then 1 else 0 end) as nightshifts,\n count(*) as total\nFROM Attendance a\nWHERE (in_time BETWEEN CONVERT(DATETIME, '2014-01-07 00:00:00', 102) AND\n CONVERT(DATETIME, '2014-07-31 00:00:00', 102)) AND\n PID = 'A002';\n
\nEDIT:
\nIf you want counts of distinct dates for the total, then use count(distinct):
\nSELECT EID,\n sum(case when shift = 'd' then 1 else 0 end) as dayshifts,\n sum(case when shift = 'n' then 1 else 0 end) as nightshifts,\n count(distinct case when shift in ('d', 'n') then cast(in_time as date) end) as total\nFROM Attendance a\nWHERE (in_time BETWEEN CONVERT(DATETIME, '2014-01-07 00:00:00', 102) AND\n CONVERT(DATETIME, '2014-07-31 00:00:00', 102)) AND\n PID = 'A002';\n
\n
soup wrap:
I think you can get what you want using conditional aggregation:
SELECT EID,
sum(case when shift = 'd' then 1 else 0 end) as dayshifts,
sum(case when shift = 'n' then 1 else 0 end) as nightshifts,
count(*) as total
FROM Attendance a
WHERE (in_time BETWEEN CONVERT(DATETIME, '2014-01-07 00:00:00', 102) AND
CONVERT(DATETIME, '2014-07-31 00:00:00', 102)) AND
PID = 'A002';
EDIT:
If you want counts of distinct dates for the total, then use count(distinct):
SELECT EID,
sum(case when shift = 'd' then 1 else 0 end) as dayshifts,
sum(case when shift = 'n' then 1 else 0 end) as nightshifts,
count(distinct case when shift in ('d', 'n') then cast(in_time as date) end) as total
FROM Attendance a
WHERE (in_time BETWEEN CONVERT(DATETIME, '2014-01-07 00:00:00', 102) AND
CONVERT(DATETIME, '2014-07-31 00:00:00', 102)) AND
PID = 'A002';
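Conditional aggregation is portable, so the shape of this query can be sketched on SQLite from Python; the Attendance rows below are invented, and `date()` stands in for `cast(in_time as date)`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Attendance (EID TEXT, shift TEXT, in_time TEXT, PID TEXT)")
con.executemany("INSERT INTO Attendance VALUES (?, ?, ?, ?)", [
    ("E1", "d", "2014-02-01 08:00:00", "A002"),
    ("E1", "d", "2014-02-02 08:00:00", "A002"),
    ("E1", "n", "2014-02-02 20:00:00", "A002"),  # same date as a day shift
    ("E1", "n", "2014-03-01 20:00:00", "B999"),  # different PID, filtered out
])

row = con.execute("""
    SELECT EID,
           SUM(CASE WHEN shift = 'd' THEN 1 ELSE 0 END) AS dayshifts,
           SUM(CASE WHEN shift = 'n' THEN 1 ELSE 0 END) AS nightshifts,
           COUNT(DISTINCT CASE WHEN shift IN ('d','n')
                               THEN date(in_time) END) AS total
    FROM Attendance
    WHERE in_time BETWEEN '2014-01-07 00:00:00' AND '2014-07-31 00:00:00'
      AND PID = 'A002'
    GROUP BY EID
""").fetchone()
print(row)  # ('E1', 2, 1, 2): 2 day shifts, 1 night shift, 2 distinct dates
```

Note how `COUNT(DISTINCT ...)` collapses the two shifts on 2014-02-02 into one date, exactly the behaviour the EDIT describes.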
qid & accept id:
(25046224, 25116669)
query:
How to update a varray type within a table with a simple update statement?
soup:
I don't believe you can update a single object's value within a varray from plain SQL, as there is no way to reference the varray index. (The link Alessandro Rossi posted seems to support this, though not necessarily for that reason). I'd be interested to be proven wrong though, of course.
\nI know you aren't keen on a PL/SQL approach but if you do have to then you could do this to just update that value:
\ndeclare\n l_object_list my_object_varray;\n cursor c is\n select l.id, l.object_list, t.*\n from my_object_table l,\n table(l.object_list) t\n where t.value1 = 10\n for update of l.object_list;\nbegin\n for r in c loop\n l_object_list := r.object_list;\n for i in 1..l_object_list.count loop\n if l_object_list(i).value1 = 10 then\n l_object_list(i).value2 := 'obj 4 upd';\n end if;\n end loop;\n\n update my_object_table\n set object_list = l_object_list\n where current of c;\n end loop;\nend;\n/\n\nanonymous block completed\n\nselect l.id, t.* from my_object_table l, table(l.object_list) t;\n\n ID VALUE1 VALUE2 VALUE3\n---------- ---------- ---------- ----------\n 1 1 object 1 10 \n 1 2 object 2 20 \n 1 3 object 3 30 \n 2 10 obj 4 upd 10 \n 2 20 object 5 20 \n 2 30 object 6 30 \n
\n\nIf you're updating other things as well then you might prefer a function that returns the object list with the relevant value updated:
\ncreate or replace function get_updated_varray(p_object_list my_object_varray,\n p_value1 number, p_new_value2 varchar2)\nreturn my_object_varray as\n l_object_list my_object_varray;\nbegin\n l_object_list := p_object_list;\n for i in 1..l_object_list.count loop\n if l_object_list(i).value1 = p_value1 then\n l_object_list(i).value2 := p_new_value2;\n end if;\n end loop;\n\n return l_object_list;\nend;\n/\n
\nThen call that as part of an update; but you still can't update your in-line view directly:
\nupdate (\n select l.id, l.object_list\n from my_object_table l, table(l.object_list) t\n where t.value1 = 10\n)\nset object_list = get_updated_varray(object_list, 10, 'obj 4 upd');\n\nSQL Error: ORA-01779: cannot modify a column which maps to a non key-preserved table\n
\nYou need to update based on relevant the ID(s):
\nupdate my_object_table\nset object_list = get_updated_varray(object_list, 10, 'obj 4 upd')\nwhere id in (\n select l.id\n from my_object_table l, table(l.object_list) t\n where t.value1 = 10\n);\n\n1 rows updated.\n\nselect l.id, t.* from my_object_table l, table(l.object_list) t;\n\n ID VALUE1 VALUE2 VALUE3\n---------- ---------- ---------- ----------\n 1 1 object 1 10 \n 1 2 object 2 20 \n 1 3 object 3 30 \n 2 10 obj 4 upd 10 \n 2 20 object 5 20 \n 2 30 object 6 30 \n
\n\nIf you wanted to hide the complexity even further you could create a view with an instead-of trigger that calls the function:
\ncreate view my_object_view as\n select l.id, t.* from my_object_table l, table(l.object_list) t\n/\n\ncreate or replace trigger my_object_view_trigger\ninstead of update on my_object_view\nbegin\n update my_object_table\n set object_list = get_updated_varray(object_list, :old.value1, :new.value2)\n where id = :old.id;\nend;\n/\n
\nThen the update is pretty much what you wanted, superficially at least:
\nupdate my_object_view\nset value2 = 'obj 4 upd'\nwhere value1 = 10;\n\n1 rows updated.\n\nselect * from my_object_view;\n\n ID VALUE1 VALUE2 VALUE3\n---------- ---------- ---------- ----------\n 1 1 object 1 10 \n 1 2 object 2 20 \n 1 3 object 3 30 \n 2 10 obj 4 upd 10 \n 2 20 object 5 20 \n 2 30 object 6 30 \n
\n\n
soup wrap:
I don't believe you can update a single object's value within a varray from plain SQL, as there is no way to reference the varray index. (The link Alessandro Rossi posted seems to support this, though not necessarily for that reason). I'd be interested to be proven wrong though, of course.
I know you aren't keen on a PL/SQL approach but if you do have to then you could do this to just update that value:
declare
l_object_list my_object_varray;
cursor c is
select l.id, l.object_list, t.*
from my_object_table l,
table(l.object_list) t
where t.value1 = 10
for update of l.object_list;
begin
for r in c loop
l_object_list := r.object_list;
for i in 1..l_object_list.count loop
if l_object_list(i).value1 = 10 then
l_object_list(i).value2 := 'obj 4 upd';
end if;
end loop;
update my_object_table
set object_list = l_object_list
where current of c;
end loop;
end;
/
anonymous block completed
select l.id, t.* from my_object_table l, table(l.object_list) t;
ID VALUE1 VALUE2 VALUE3
---------- ---------- ---------- ----------
1 1 object 1 10
1 2 object 2 20
1 3 object 3 30
2 10 obj 4 upd 10
2 20 object 5 20
2 30 object 6 30
If you're updating other things as well then you might prefer a function that returns the object list with the relevant value updated:
create or replace function get_updated_varray(p_object_list my_object_varray,
p_value1 number, p_new_value2 varchar2)
return my_object_varray as
l_object_list my_object_varray;
begin
l_object_list := p_object_list;
for i in 1..l_object_list.count loop
if l_object_list(i).value1 = p_value1 then
l_object_list(i).value2 := p_new_value2;
end if;
end loop;
return l_object_list;
end;
/
Then call that as part of an update; but you still can't update your in-line view directly:
update (
select l.id, l.object_list
from my_object_table l, table(l.object_list) t
where t.value1 = 10
)
set object_list = get_updated_varray(object_list, 10, 'obj 4 upd');
SQL Error: ORA-01779: cannot modify a column which maps to a non key-preserved table
You need to update based on the relevant ID(s):
update my_object_table
set object_list = get_updated_varray(object_list, 10, 'obj 4 upd')
where id in (
select l.id
from my_object_table l, table(l.object_list) t
where t.value1 = 10
);
1 rows updated.
select l.id, t.* from my_object_table l, table(l.object_list) t;
ID VALUE1 VALUE2 VALUE3
---------- ---------- ---------- ----------
1 1 object 1 10
1 2 object 2 20
1 3 object 3 30
2 10 obj 4 upd 10
2 20 object 5 20
2 30 object 6 30
If you wanted to hide the complexity even further you could create a view with an instead-of trigger that calls the function:
create view my_object_view as
select l.id, t.* from my_object_table l, table(l.object_list) t
/
create or replace trigger my_object_view_trigger
instead of update on my_object_view
begin
update my_object_table
set object_list = get_updated_varray(object_list, :old.value1, :new.value2)
where id = :old.id;
end;
/
Then the update is pretty much what you wanted, superficially at least:
update my_object_view
set value2 = 'obj 4 upd'
where value1 = 10;
1 rows updated.
select * from my_object_view;
ID VALUE1 VALUE2 VALUE3
---------- ---------- ---------- ----------
1 1 object 1 10
1 2 object 2 20
1 3 object 3 30
2 10 obj 4 upd 10
2 20 object 5 20
2 30 object 6 30
qid & accept id:
(25076117, 25076221)
query:
sqlite replace() function to perform a string replace
soup:
Just add a comma to all occurrences of 0.:
\n replace(TheColumn, '0.', ',0.')\n
\nthen remove the duplicates:
\n replace(replace(TheColumn, '0.', ',0.'), ',,', ',')\n
\nand the comma at the beginning:
\nsubstr(replace(replace(TheColumn, '0.', ',0.'), ',,', ','), 2)\n
\n
soup wrap:
Just add a comma to all occurrences of 0.:
replace(TheColumn, '0.', ',0.')
then remove the duplicates:
replace(replace(TheColumn, '0.', ',0.'), ',,', ',')
and the comma at the beginning:
substr(replace(replace(TheColumn, '0.', ',0.'), ',,', ','), 2)
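Since this one is SQLite already, the three nested steps can be exercised directly from Python; the input strings are made-up samples:

```python
import sqlite3

con = sqlite3.connect(":memory:")

def split_run(value):
    # The three nested steps from above, applied via SQLite itself:
    # insert a comma before each '0.', collapse doubled commas,
    # then drop the leading comma.
    sql = "SELECT substr(replace(replace(?, '0.', ',0.'), ',,', ','), 2)"
    return con.execute(sql, (value,)).fetchone()[0]

a = split_run("0.10.20.3")   # no separators at all
b = split_run("0.1,0.2")     # already partially comma-separated
print(a, b)  # 0.1,0.2,0.3 0.1,0.2
```

The second case shows why the `',,'` cleanup pass is needed: existing commas would otherwise be doubled.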
qid & accept id:
(25095284, 25095313)
query:
Using LEFT JOIN to returns rows that don't have a match
soup:
SELECT a.auction_id\n FROM auctions AS a\n LEFT JOIN winners AS w\n ON a.auction_id = w.auction_id\n WHERE a.owner_id = 1234567\n AND a.is_draft = 0\n AND a.creation_in_progress = 0\n AND w.winner_id IS NULL\n
\nThis belongs in the WHERE clause:
\n AND w.winner_id IS NULL\n
\nCriteria on the outer joined table belongs in the ON clause when you want to ALLOW nulls. In this case, where you're filtering in on nulls, you put that criteria into the WHERE clause. Everything in the ON clause is designed to allow nulls.
\nHere are some examples using data from a question I answered not long ago:
\nProper use of where x is null:\nhttp://sqlfiddle.com/#!2/8936b5/2/0
\nSame thing but improperly placing that criteria into the ON clause:\nhttp://sqlfiddle.com/#!2/8936b5/3/0
\n(notice the FUNCTIONAL difference, the result is not the same, because the queries are not functionally equivalent)
\n
soup wrap:
SELECT a.auction_id
FROM auctions AS a
LEFT JOIN winners AS w
ON a.auction_id = w.auction_id
WHERE a.owner_id = 1234567
AND a.is_draft = 0
AND a.creation_in_progress = 0
AND w.winner_id IS NULL
This belongs in the WHERE clause:
AND w.winner_id IS NULL
Criteria on the outer joined table belongs in the ON clause when you want to ALLOW nulls. In this case, where you're filtering in on nulls, you put that criteria into the WHERE clause. Everything in the ON clause is designed to allow nulls.
Here are some examples using data from a question I answered not long ago:
Proper use of where x is null:
http://sqlfiddle.com/#!2/8936b5/2/0
Same thing but improperly placing that criteria into the ON clause:
http://sqlfiddle.com/#!2/8936b5/3/0
(notice the FUNCTIONAL difference, the result is not the same, because the queries are not functionally equivalent)
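The functional difference is easy to reproduce locally; here is a sketch of both placements on SQLite with invented auction data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE auctions (auction_id INTEGER, owner_id INTEGER)")
con.execute("CREATE TABLE winners (auction_id INTEGER, winner_id INTEGER)")
con.executemany("INSERT INTO auctions VALUES (?, ?)",
                [(1, 1234567), (2, 1234567), (3, 1234567)])
con.execute("INSERT INTO winners VALUES (1, 42)")  # only auction 1 has a winner

# IS NULL in WHERE: keeps only auctions with no matching winner row.
where_null = con.execute("""
    SELECT a.auction_id FROM auctions a
    LEFT JOIN winners w ON a.auction_id = w.auction_id
    WHERE a.owner_id = 1234567 AND w.winner_id IS NULL
""").fetchall()

# The same predicate moved into ON: every auction survives, because
# the ON clause only decides what the LEFT JOIN matches, not which
# left-side rows are returned.
on_null = con.execute("""
    SELECT a.auction_id FROM auctions a
    LEFT JOIN winners w ON a.auction_id = w.auction_id
                       AND w.winner_id IS NULL
    WHERE a.owner_id = 1234567
""").fetchall()
print(sorted(where_null), sorted(on_null))  # [(2,), (3,)] [(1,), (2,), (3,)]
```

With the predicate in ON, auction 1 simply fails to match its winner row and comes back anyway (with NULLs), which is usually not what an anti-join intends.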
qid & accept id:
(25140883, 25141261)
query:
Converting XML in SQL Server
soup:
Try something like this.
\nIf you have a XML variable:
\ndeclare @xml XML = ' ';\n\nselect \n data.node.value('@en-US', 'varchar(11)') my_column\nfrom @xml.nodes('locale') data(node);\n
\nIn your case, for a table's column (sorry for not given this example first):
\ncreate table dbo.example_xml\n(\n my_column XML not null\n);\ngo\n\ninsert into dbo.example_xml\nvalues(' ');\ngo\n\nselect\n my_column.value('(/locale/@en-US)[1]', 'varchar(11)') [en-US]\nfrom dbo.example_xml;\ngo\n
\nHope it helps.
\n
soup wrap:
Try something like this.
If you have an XML variable:
declare @xml XML = '<locale en-US="sample text"/>'; -- sample element; the original XML literal was lost
select
data.node.value('@en-US', 'varchar(11)') my_column
from @xml.nodes('locale') data(node);
In your case, for a table's column (sorry for not giving this example first):
create table dbo.example_xml
(
my_column XML not null
);
go
insert into dbo.example_xml
values('<locale en-US="sample text"/>'); -- sample element; the original XML literal was lost
go
select
my_column.value('(/locale/@en-US)[1]', 'varchar(11)') [en-US]
from dbo.example_xml;
go
Hope it helps.
qid & accept id:
(25144691, 25144760)
query:
MySQL counting and sorting rows returned from a query
soup:
Just add an aggregate function (e.g. COUNT() or SUM()) in the SELECT list, and add a GROUP BY clause to the query, and an ORDER BY clause to the query.
\nSELECT U.username\n , COUNT(Q.question_id)\n FROM ...\n\n GROUP BY Q.author_id\n ORDER BY COUNT(Q.question_id) DESC\n
\n
\nNote that the predicate on the role column in the WHERE clause of your query negates the "outerness" of the LEFT JOIN operation. (With the LEFT JOIN, any rows from Q that don't find a matching row in U, will return NULL for all of the columns in U. Adding a predicate U.role = '0' in the WHERE clause will cause any rows with a NULL value in U.role to be excluded.
\n
\nThis would return distinct values of username, along with a "count" of the questions related to that user:
\nSELECT U.username\n , COUNT(Q.question_id)\n FROM p1209279x.questions Q\n JOIN p1209279x.users U\n ON U.user_id=Q.author_id\n WHERE Q.approved='Y'\n AND Q.role='0'\n GROUP BY Q.author_id\n ORDER BY COUNT(Q.question_id) DESC\n
\n
soup wrap:
Just add an aggregate function (e.g. COUNT() or SUM()) in the SELECT list, and add a GROUP BY clause to the query, and an ORDER BY clause to the query.
SELECT U.username
, COUNT(Q.question_id)
FROM ...
GROUP BY Q.author_id
ORDER BY COUNT(Q.question_id) DESC
Note that the predicate on the role column in the WHERE clause of your query negates the "outerness" of the LEFT JOIN operation. (With the LEFT JOIN, any rows from Q that don't find a matching row in U will return NULL for all of the columns in U. Adding a predicate U.role = '0' in the WHERE clause will cause any rows with a NULL value in U.role to be excluded.)
This would return distinct values of username, along with a "count" of the questions related to that user:
SELECT U.username
, COUNT(Q.question_id)
FROM p1209279x.questions Q
JOIN p1209279x.users U
ON U.user_id=Q.author_id
WHERE Q.approved='Y'
AND Q.role='0'
GROUP BY Q.author_id
ORDER BY COUNT(Q.question_id) DESC
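The aggregate-plus-GROUP-BY pattern is standard SQL, so it can be sketched on SQLite with made-up users and questions (the real schema names `p1209279x.questions` etc. are kept out of this sketch):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (user_id INTEGER, username TEXT)")
con.execute("CREATE TABLE questions (question_id INTEGER, author_id INTEGER, approved TEXT)")
con.executemany("INSERT INTO users VALUES (?, ?)", [(1, "ann"), (2, "bob")])
con.executemany("INSERT INTO questions VALUES (?, ?, ?)",
                [(10, 1, "Y"), (11, 1, "Y"), (12, 2, "Y"), (13, 2, "N")])

# One row per author, with a count of their approved questions,
# busiest author first.
rows = con.execute("""
    SELECT u.username, COUNT(q.question_id) AS n
    FROM questions q
    JOIN users u ON u.user_id = q.author_id
    WHERE q.approved = 'Y'
    GROUP BY q.author_id
    ORDER BY COUNT(q.question_id) DESC, u.username
""").fetchall()
print(rows)  # [('ann', 2), ('bob', 1)]
```

Bob's unapproved question is filtered out before grouping, so his count is 1.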
qid & accept id:
(25207558, 25207609)
query:
Get first 100 records in a table
soup:
Try this to get the 100 records:
\n select \np.attr_value product,\nm.attr_value model,\nu.attr_value usage,\nl.attr_value location\n from table1 t1 join table2 t2 on t1.e_subid = t2.e_subid\n join table4 t4 on t4.loc_id = t1.loc_id\n join table3 p on t2.e_cid = p.e_cid \n join table3 m on t2.e_cid = m.e_cid \n join table3 u on t2.e_cid = u.e_cid \n Where\n t4.attr_name = 'SiteName' \n and p.attr_name = 'Product'\n and m.attr_name = 'Model'\n and u.attr_name = 'Usage'\n and ROWNUM <= 100\n order by product,location;\n
\nAlso note that Oracle applies rownum to the result after it has been returned.
\nHowever you may try to check if the value exists in the table using this:
\nselect case \n when exists (select 1\n from table1 t1 join table2 t2 on t1.e_subid = t2.e_subid\n join table4 t4 on t4.loc_id = t1.loc_id\n join table3 p on t2.e_cid = p.e_cid \n join table3 m on t2.e_cid = m.e_cid \n join table3 u on t2.e_cid = u.e_cid \n Where\n t4.attr_name = 'SiteName' \n and p.attr_name = 'Product'\n and m.attr_name = 'Model'\n and u.attr_name = 'Usage'\n order by product,location;\n) \n then 'Y' \n else 'N' \n end as rec_exists\nfrom dual;\n
\n
soup wrap:
Try this to get the 100 records:
select
p.attr_value product,
m.attr_value model,
u.attr_value usage,
l.attr_value location
from table1 t1 join table2 t2 on t1.e_subid = t2.e_subid
join table4 t4 on t4.loc_id = t1.loc_id
join table3 p on t2.e_cid = p.e_cid
join table3 m on t2.e_cid = m.e_cid
join table3 u on t2.e_cid = u.e_cid
join table3 l on t2.e_cid = l.e_cid -- this join was missing for the l alias
Where
t4.attr_name = 'SiteName'
and p.attr_name = 'Product'
and m.attr_name = 'Model'
and u.attr_name = 'Usage'
and l.attr_name = 'Location' -- assumed attribute name for the added join
and ROWNUM <= 100
order by product,location;
Also note that Oracle assigns ROWNUM before the ORDER BY is applied, so this query picks some 100 rows first and then sorts them. To get the first 100 rows of the ordered result, put the ORDER BY in a subquery and apply ROWNUM outside it.
However you may try to check if the value exists in the table using this:
select case
when exists (select 1
from table1 t1 join table2 t2 on t1.e_subid = t2.e_subid
join table4 t4 on t4.loc_id = t1.loc_id
join table3 p on t2.e_cid = p.e_cid
join table3 m on t2.e_cid = m.e_cid
join table3 u on t2.e_cid = u.e_cid
Where
t4.attr_name = 'SiteName'
and p.attr_name = 'Product'
and m.attr_name = 'Model'
and u.attr_name = 'Usage'
)
then 'Y'
else 'N'
end as rec_exists
from dual;
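The restrict-before-order pitfall isn't Oracle-specific; the same thing can be sketched on SQLite by comparing LIMIT inside a subquery (analogous to `ROWNUM <= 2` applied before ordering) against ORDER BY followed by LIMIT. The table and values are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(3,), (1,), (2,)])

# Analogue of "ROWNUM <= 2 ... ORDER BY": rows are restricted first,
# in whatever order the scan produces them, then the survivors are sorted.
restrict_then_order = con.execute(
    "SELECT x FROM (SELECT x FROM t LIMIT 2) ORDER BY x").fetchall()

# Analogue of ordering in a subquery and restricting outside it:
# the smallest values are guaranteed to win.
order_then_restrict = con.execute(
    "SELECT x FROM t ORDER BY x LIMIT 2").fetchall()
print(restrict_then_order, order_then_restrict)
```

Only the second form reliably returns the two smallest values; the first form's content depends on scan order, which is exactly why the ORDER BY belongs inside the subquery in the Oracle version.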
qid & accept id:
(25238315, 25238476)
query:
select distinct of a column and order by date column but without showing the date column
soup:
Try this query:
\nWITH Names AS (\n SELECT\n Name,\n Seq = Dense_Rank() OVER (ORDER BY SomeDate)\n - Dense_Rank() OVER (PARTITION BY Name ORDER BY SomeDate)\n FROM\n dbo.Names\n)\nSELECT Name\nFROM Names\nGROUP BY Name, Seq\nORDER BY Min(Seq)\n;\n
\n\nThis will return the A, B, A pattern you requested.
\nYou can't use a simple DISTINCT because you're asking to display a single value, but order by all the dates that the value may have associated with it. What if your data looks like this?
\nName Date\n---- ----\nA 2014-01-01\nB 2014-02-01\nB 2014-03-01\nA 2014-04-01\n
\nHow do you decide whether to put A first, or B first, based one some theoretical ordering by the date?
\nThat is why I had to do the above subtraction of windowing functions, which should order things how you want.
\nNotes
\nI call this technique a "simulated PREORDER BY". Dense_Rank does not offer any way to preorder the rows before ranking based on ordering. If you could do Dense_Rank() OVER (PREORDER BY Date ORDER BY Name) to indicate that you want to order by Date first, but don't want it to be part of the resulting rank calculation, you'd be set! However, that doesn't exist. After some study a while back I hit on the idea to use a combination of windowing functions to accomplish the purpose, and the above query represents that result.
\nNote that you must also GROUP BY the Name, not just the resulting subtracted windowing expressions, in order for everything to work correctly, because the expression, while unique to the other column (in this case, Name), can result in duplicate values across the entire set (two different value Names can have the same expression result). You can assign a new rank or other windowing function if there is a desire for a value that can be ordered by individually.
\n
soup wrap:
Try this query:
WITH Names AS (
SELECT
Name,
Seq = Dense_Rank() OVER (ORDER BY SomeDate)
- Dense_Rank() OVER (PARTITION BY Name ORDER BY SomeDate)
FROM
dbo.Names
)
SELECT Name
FROM Names
GROUP BY Name, Seq
ORDER BY Min(Seq)
;
This will return the A, B, A pattern you requested.
You can't use a simple DISTINCT because you're asking to display a single value, but order by all the dates that the value may have associated with it. What if your data looks like this?
Name Date
---- ----
A 2014-01-01
B 2014-02-01
B 2014-03-01
A 2014-04-01
How do you decide whether to put A first, or B first, based on some theoretical ordering by the date?
That is why I had to do the above subtraction of windowing functions, which should order things how you want.
Notes
I call this technique a "simulated PREORDER BY". Dense_Rank does not offer any way to preorder the rows before ranking based on ordering. If you could do Dense_Rank() OVER (PREORDER BY Date ORDER BY Name) to indicate that you want to order by Date first, but don't want it to be part of the resulting rank calculation, you'd be set! However, that doesn't exist. After some study a while back I hit on the idea to use a combination of windowing functions to accomplish the purpose, and the above query represents that result.
Note that you must also GROUP BY the Name, not just the resulting subtracted windowing expression, in order for everything to work correctly: the expression, while unique within a given Name, can result in duplicate values across the entire set (two different Name values can have the same expression result). You can assign a new rank or other windowing function if you need a single value that can be ordered on individually.
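The simulated-PREORDER-BY trick works on any engine with DENSE_RANK, so here is a sketch on SQLite with a made-up A, B, B, A history that collapses to the A, B, A run pattern:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Names (Name TEXT, SomeDate TEXT)")
# Chronological runs: A, then B B, then A again.
con.executemany("INSERT INTO Names VALUES (?, ?)", [
    ("A", "2014-01-01"), ("B", "2014-02-01"),
    ("B", "2014-03-01"), ("A", "2014-04-01"),
])

# Overall date rank minus per-Name date rank is constant within each
# consecutive run of the same Name, so grouping on (Name, Seq) yields
# one row per run.
runs = con.execute("""
    WITH Ranked AS (
        SELECT Name,
               DENSE_RANK() OVER (ORDER BY SomeDate)
             - DENSE_RANK() OVER (PARTITION BY Name ORDER BY SomeDate) AS Seq
        FROM Names
    )
    SELECT Name FROM Ranked
    GROUP BY Name, Seq
    ORDER BY MIN(Seq)
""").fetchall()
print(runs)  # [('A',), ('B',), ('A',)]
```

Here the Seq values come out as 0, 1, and 2 for the three runs, so MIN(Seq) orders them chronologically; as the answer notes, the difference expression alone can collide across Names, which is why the GROUP BY includes Name as well.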
qid & accept id:
(25259434, 25260812)
query:
Simple fetch ASP prepared statement
soup:
this will not work in classic asp:
\nDim cmdPrep1 As New ADODB.Command\n
\nyou have to use server.createobject like so:
\ndim cmdPrep1 : set cmdPrep1 = server.createobject("ADODB.Command")\n\ncmdPrep1.ActiveConnection = cn\ncmdPrep1.CommandType = adCmdText\ncmdPrep1.CommandText = "SELECT ID,NAME FROM MEMBERS WHERE ID =?"\n\n\ncmdPrep1.parameters.Append cmd.createParameter( "ID", adInteger, , , Request.Form("nameOfIDField") )\n\ndim rs : set rs = cmdPrep1.execute\n
\nnow you have an ADODB.Recordset in your variable rs.
\n
soup wrap:
this will not work in classic asp:
Dim cmdPrep1 As New ADODB.Command
you have to use server.createobject like so:
dim cmdPrep1 : set cmdPrep1 = server.createobject("ADODB.Command")
cmdPrep1.ActiveConnection = cn
cmdPrep1.CommandType = adCmdText
cmdPrep1.CommandText = "SELECT ID,NAME FROM MEMBERS WHERE ID =?"
cmdPrep1.parameters.Append cmdPrep1.createParameter( "ID", adInteger, , , Request.Form("nameOfIDField") )
dim rs : set rs = cmdPrep1.execute
now you have an ADODB.Recordset in your variable rs.
qid & accept id:
(25275552, 25277245)
query:
MySQL UPDATE - SET field in column to 1, all other fields to 0, with one query
soup:
I think you want this logic:
\nUPDATE table\n SET frontpage = (case when poll_id = '555' then '1' else '0' end)\n WHERE user_id = '999';\n
\nAs a note: if the constants should really be integers, then drop the single quotes. In fact, you can then simplify the query to:
\nUPDATE table\n SET frontpage = (poll_id = 555)\n WHERE user_id = 999;\n
\n
soup wrap:
I think you want this logic:
UPDATE table
SET frontpage = (case when poll_id = '555' then '1' else '0' end)
WHERE user_id = '999';
As a note: if the constants should really be integers, then drop the single quotes. In fact, you can then simplify the query to:
UPDATE table
SET frontpage = (poll_id = 555)
WHERE user_id = 999;
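The comparison-as-integer shortcut works the same way in SQLite, so the simplified form can be sketched from Python; `polls` is a hypothetical table name (the question's literal `table` is a reserved word):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE polls (user_id INTEGER, poll_id INTEGER, frontpage INTEGER)")
con.executemany("INSERT INTO polls VALUES (?, ?, ?)",
                [(999, 555, 0), (999, 556, 1), (111, 555, 0)])

# The comparison (poll_id = 555) itself evaluates to 1 or 0,
# so one UPDATE flips the whole set for that user.
con.execute("UPDATE polls SET frontpage = (poll_id = 555) WHERE user_id = 999")
rows = con.execute(
    "SELECT poll_id, frontpage FROM polls WHERE user_id = 999 ORDER BY poll_id"
).fetchall()
print(rows)  # [(555, 1), (556, 0)]
```

User 111's row is untouched by the WHERE clause, and poll 556's flag is cleared in the same statement that sets poll 555's.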
qid & accept id:
(25292138, 25292502)
query:
How to convert SQL Server Query into Access
soup:
A direct translation into Access would be:
\nselect * from tblClient\nwhere company & dba1 & dba2 & dba3 like '*jbl*'\n
\nEDIT:\nTo make an exact match, you could do:
\nselect * from tblClient\nwhere '|' & company & '|' & dba1 & '|' & dba2 & '|' & dba3 & '|' like '*|' & 'jbl' & '|*'\n
\n
soup wrap:
A direct translation into Access would be:
select * from tblClient
where company & dba1 & dba2 & dba3 like '*jbl*'
EDIT:
To make an exact match, you could do:
select * from tblClient
where '|' & company & '|' & dba1 & '|' & dba2 & '|' & dba3 & '|' like '*|' & 'jbl' & '|*'
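The delimiter trick for exact matching translates to other engines too. A sketch on SQLite, which uses `||` and `%` where Access uses `&` and `*`, with invented client rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tblClient (company TEXT, dba1 TEXT)")
con.executemany("INSERT INTO tblClient VALUES (?, ?)", [
    ("jbl corp", "jbl"),   # dba1 is exactly 'jbl'
    ("jbl corp", "x"),     # 'jbl' appears only as a substring
    ("acme", "jbls"),      # substring match only
])

# Wrapping every field in '|' delimiters means '|jbl|' can only match
# a field whose entire value is 'jbl'.
rows = con.execute("""
    SELECT company, dba1 FROM tblClient
    WHERE '|' || company || '|' || dba1 || '|' LIKE '%|jbl|%'
""").fetchall()
print(rows)  # [('jbl corp', 'jbl')]
```

The substring-only rows are excluded because their concatenated strings never contain `|jbl|` with delimiters on both sides.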
qid & accept id:
(25319348, 25326206)
query:
Unpivot Multiple Columns in MySQL
soup:
For this suggestion I have created a simple 50 row table called TransPoser, there may already be a table of integers available in MySQL or in your db, but you want something similar that will give your number 1 to N for those numbered columns.
\nThen, using that table, cross join to your non-normalized table (I call it BadTable) but restrict this to the first row. Then using a set of case expressions we pivot those date strings into a column. It would be possible to convert to a proper date as we do this if needed (I would suggest it, but haven't included it).
\nThis small transposition is then used as a derived table in the main query.
\nThe main query ignores that first row, but also uses a cross join to force all original rows into the 50 rows (or 4 as we see in this example). This Cartesian product is then joined back to the derived table discussed above to supply the dates. Then it is another set of case expressions that transpose the percentages into a column, aligned to the date and various codes.
\nExample result (from sample data), blank lines added manually:
\n| N | CODE | DESC | CODE_0 | DESC_0 | THEDATE | PERCENTAGE |\n|---|-------|------|--------|--------|-----------|------------|\n| 1 | CTR07 | Risk | P1 | Phase1 | 29-Nov-13 | 0.2 |\n| 1 | CTR07 | Risk | P1 | Phase1 | 29-Nov-13 | 0.2 |\n| 1 | CTR07 | Risk | P1 | Phase1 | 29-Nov-13 | 0.2 |\n| 1 | CTR08 | Oper | P1 | Phase1 | 29-Nov-13 | 0.6 |\n| 1 | CTR08 | Oper | P1 | Phase1 | 29-Nov-13 | 0.6 |\n| 1 | CTR08 | Oper | P1 | Phase1 | 29-Nov-13 | 0.6 |\n\n| 2 | CTR07 | Risk | P1 | Phase1 | 6-Dec-13 | 0.4 |\n| 2 | CTR07 | Risk | P1 | Phase1 | 6-Dec-13 | 0.4 |\n| 2 | CTR07 | Risk | P1 | Phase1 | 6-Dec-13 | 0.4 |\n| 2 | CTR08 | Oper | P1 | Phase1 | 6-Dec-13 | 0.6 |\n| 2 | CTR08 | Oper | P1 | Phase1 | 6-Dec-13 | 0.6 |\n| 2 | CTR08 | Oper | P1 | Phase1 | 6-Dec-13 | 0.6 |\n\n| 3 | CTR07 | Risk | P1 | Phase1 | 13-Dec-13 | 0.6 |\n| 3 | CTR07 | Risk | P1 | Phase1 | 13-Dec-13 | 0.6 |\n| 3 | CTR07 | Risk | P1 | Phase1 | 13-Dec-13 | 0.6 |\n| 3 | CTR08 | Oper | P1 | Phase1 | 13-Dec-13 | 0.9 |\n| 3 | CTR08 | Oper | P1 | Phase1 | 13-Dec-13 | 0.9 |\n| 3 | CTR08 | Oper | P1 | Phase1 | 13-Dec-13 | 0.9 |\n\n| 4 | CTR07 | Risk | P1 | Phase1 | 20-Dec-13 | 1.1 |\n| 4 | CTR07 | Risk | P1 | Phase1 | 20-Dec-13 | 1.1 |\n| 4 | CTR07 | Risk | P1 | Phase1 | 20-Dec-13 | 1.1 |\n| 4 | CTR08 | Oper | P1 | Phase1 | 20-Dec-13 | 2.7 |\n| 4 | CTR08 | Oper | P1 | Phase1 | 20-Dec-13 | 2.7 |\n| 4 | CTR08 | Oper | P1 | Phase1 | 20-Dec-13 | 2.7 |\n
\nThe query:
\nselect\n n.n\n , b.Code\n , b.Desc\n , b.Code_0\n , b.Desc_0\n , T.theDate\n , case\n when n.n = 1 then `1`\n when n.n = 2 then `2`\n when n.n = 3 then `3`\n when n.n = 4 then `4`\n /* when n.n = 5 then `5` */\n /* when n.n = 50 then `50`*/\n end as Percentage\nfrom BadTable as B\ncross join (select N from TransPoser where N < 5) as N\ninner join (\n /* transpose just the date row */\n /* join back vis the number given to each row */\n select\n n.n\n , case\n when n.n = 1 then `1`\n when n.n = 2 then `2`\n when n.n = 3 then `3`\n when n.n = 4 then `4`\n /* when n.n = 5 then `5` */\n /* when n.n = 50 then `50`*/\n end as theDate\n from BadTable as B\n cross join (select N from TransPoser where N < 5) as N\n where b.code is null\n and b.Period = 'Date'\n ) as T on N.N = T.N\nwhere b.code is NOT null\nand b.Period <> 'Date'\norder by\n n.n\n , b.code\n;\n
\nfor the above see this SQLFIDDLE
\nIt really isn't fair to expect a fully prepared executable deliverable as the result of a question IMHO - it is "stretching the friendship". But to morph the above query into a dynamic query isn't too hard. it's just a bit "tedious" as the syntax is a bit tricky. I'm not that experienced with MySQL but this is how I would do it:
\nset @numcols := 4;\nset @casevar := '';\n\nset @casevar := (\n select \n group_concat(@casevar\n ,'when n.n = '\n , n.n\n ,' then `'\n , n.n\n ,'`'\n SEPARATOR ' ')\n from TransPoser as n\n where n.n <= @numcols\n )\n;\n\n\nset @sqlvar := concat(\n 'SELECT n.n , b.Code , b.Desc , b.Code_0 , b.Desc_0 , T.theDate , CASE '\n , @casevar\n , ' END AS Percentage FROM BadTable AS B CROSS JOIN (SELECT N FROM TransPoser WHERE N <='\n , @numcols\n , ') AS N INNER JOIN ( SELECT n.n , CASE '\n , @casevar \n , ' END AS theDate FROM BadTable AS B CROSS JOIN (SELECT N FROM TransPoser WHERE N <='\n , @numcols\n , ') AS N WHERE b.code IS NULL '\n , ' AND b.Period = ''Date'' ) AS T ON N.N = T.N WHERE b.code IS NOT NULL AND b.Period <> ''Date'' ORDER BY n.n , b.code ' \n );\n\nPREPARE stmt FROM @sqlvar;\nEXECUTE stmt;\n
\n\n
soup wrap:
For this suggestion I have created a simple 50-row table called TransPoser. There may already be a table of integers available in MySQL or in your db, but you want something similar that will give you the numbers 1 to N for those numbered columns.
Then, using that table, cross join to your non-normalized table (I call it BadTable), but restrict this to the first row. Then, using a set of case expressions, we pivot those date strings into a column. It would be possible to convert them to a proper date as we do this if needed (I would suggest it, but haven't included it).
This small transposition is then used as a derived table in the main query.
The main query ignores that first row, but also uses a cross join to force all original rows into the 50 rows (or 4, as we see in this example). This Cartesian product is then joined back to the derived table discussed above to supply the dates. Another set of case expressions then transposes the percentages into a column, aligned to the date and the various codes.
Example result (from sample data), blank lines added manually:
| N | CODE | DESC | CODE_0 | DESC_0 | THEDATE | PERCENTAGE |
|---|-------|------|--------|--------|-----------|------------|
| 1 | CTR07 | Risk | P1 | Phase1 | 29-Nov-13 | 0.2 |
| 1 | CTR07 | Risk | P1 | Phase1 | 29-Nov-13 | 0.2 |
| 1 | CTR07 | Risk | P1 | Phase1 | 29-Nov-13 | 0.2 |
| 1 | CTR08 | Oper | P1 | Phase1 | 29-Nov-13 | 0.6 |
| 1 | CTR08 | Oper | P1 | Phase1 | 29-Nov-13 | 0.6 |
| 1 | CTR08 | Oper | P1 | Phase1 | 29-Nov-13 | 0.6 |
| 2 | CTR07 | Risk | P1 | Phase1 | 6-Dec-13 | 0.4 |
| 2 | CTR07 | Risk | P1 | Phase1 | 6-Dec-13 | 0.4 |
| 2 | CTR07 | Risk | P1 | Phase1 | 6-Dec-13 | 0.4 |
| 2 | CTR08 | Oper | P1 | Phase1 | 6-Dec-13 | 0.6 |
| 2 | CTR08 | Oper | P1 | Phase1 | 6-Dec-13 | 0.6 |
| 2 | CTR08 | Oper | P1 | Phase1 | 6-Dec-13 | 0.6 |
| 3 | CTR07 | Risk | P1 | Phase1 | 13-Dec-13 | 0.6 |
| 3 | CTR07 | Risk | P1 | Phase1 | 13-Dec-13 | 0.6 |
| 3 | CTR07 | Risk | P1 | Phase1 | 13-Dec-13 | 0.6 |
| 3 | CTR08 | Oper | P1 | Phase1 | 13-Dec-13 | 0.9 |
| 3 | CTR08 | Oper | P1 | Phase1 | 13-Dec-13 | 0.9 |
| 3 | CTR08 | Oper | P1 | Phase1 | 13-Dec-13 | 0.9 |
| 4 | CTR07 | Risk | P1 | Phase1 | 20-Dec-13 | 1.1 |
| 4 | CTR07 | Risk | P1 | Phase1 | 20-Dec-13 | 1.1 |
| 4 | CTR07 | Risk | P1 | Phase1 | 20-Dec-13 | 1.1 |
| 4 | CTR08 | Oper | P1 | Phase1 | 20-Dec-13 | 2.7 |
| 4 | CTR08 | Oper | P1 | Phase1 | 20-Dec-13 | 2.7 |
| 4 | CTR08 | Oper | P1 | Phase1 | 20-Dec-13 | 2.7 |
The query:
select
n.n
, b.Code
, b.Desc
, b.Code_0
, b.Desc_0
, T.theDate
, case
when n.n = 1 then `1`
when n.n = 2 then `2`
when n.n = 3 then `3`
when n.n = 4 then `4`
/* when n.n = 5 then `5` */
/* when n.n = 50 then `50`*/
end as Percentage
from BadTable as B
cross join (select N from TransPoser where N < 5) as N
inner join (
/* transpose just the date row */
/* join back via the number given to each row */
select
n.n
, case
when n.n = 1 then `1`
when n.n = 2 then `2`
when n.n = 3 then `3`
when n.n = 4 then `4`
/* when n.n = 5 then `5` */
/* when n.n = 50 then `50`*/
end as theDate
from BadTable as B
cross join (select N from TransPoser where N < 5) as N
where b.code is null
and b.Period = 'Date'
) as T on N.N = T.N
where b.code is NOT null
and b.Period <> 'Date'
order by
n.n
, b.code
;
for the above see this SQLFIDDLE
It really isn't fair to expect a fully prepared executable deliverable as the result of a question IMHO - it is "stretching the friendship". But to morph the above query into a dynamic query isn't too hard; it's just a bit "tedious", as the syntax is a bit tricky. I'm not that experienced with MySQL, but this is how I would do it:
set @numcols := 4;
set @casevar := '';
set @casevar := (
select
group_concat(@casevar
,'when n.n = '
, n.n
,' then `'
, n.n
,'`'
SEPARATOR ' ')
from TransPoser as n
where n.n <= @numcols
)
;
set @sqlvar := concat(
'SELECT n.n , b.Code , b.Desc , b.Code_0 , b.Desc_0 , T.theDate , CASE '
, @casevar
, ' END AS Percentage FROM BadTable AS B CROSS JOIN (SELECT N FROM TransPoser WHERE N <='
, @numcols
, ') AS N INNER JOIN ( SELECT n.n , CASE '
, @casevar
, ' END AS theDate FROM BadTable AS B CROSS JOIN (SELECT N FROM TransPoser WHERE N <='
, @numcols
, ') AS N WHERE b.code IS NULL '
, ' AND b.Period = ''Date'' ) AS T ON N.N = T.N WHERE b.code IS NOT NULL AND b.Period <> ''Date'' ORDER BY n.n , b.code '
);
PREPARE stmt FROM @sqlvar;
EXECUTE stmt;
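A minimal sketch, not from the original answer: the same string-building step in plain Python, which can make it easier to see what the GROUP_CONCAT/CONCAT pair is assembling. The function name build_case is made up for illustration.

```python
# Build the dynamic CASE expression the answer assembles with GROUP_CONCAT.
# Column names `1`..`4` are backtick-quoted identifiers, as in the answer.
def build_case(numcols, alias):
    # One WHEN branch per transposed column, e.g. "when n.n = 1 then `1`"
    branches = " ".join(
        "when n.n = {0} then `{0}`".format(n) for n in range(1, numcols + 1)
    )
    return "CASE {} END AS {}".format(branches, alias)

print(build_case(4, "Percentage"))
```

Once a string like this is built, it is spliced into the larger statement exactly as the CONCAT call above does, then handed to PREPARE/EXECUTE.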
qid & accept id:
(25321698, 25321779)
query:
How to split a mysql field into two and compare string between both splited fields
soup:
Use LEFT() and RIGHT() since the length of your values is fixed, and use STR_TO_DATE() to convert your string to a date. Here is the example:
\nSELECT financial_year\nFROM financial_years\nWHERE STR_TO_DATE('03-05-2011','%d-%m-%Y') >= DATE( LEFT(financial_year,10) )\nAND STR_TO_DATE('03-05-2011','%d-%m-%Y') <= DATE( RIGHT(financial_year,10) );\n
\nIf the data type of financial_year is VARCHAR() you should use STR_TO_DATE() too like on this one
\nSTR_TO_DATE(LEFT(financial_year,10),'%d-%m-%Y') \n
\nand
\nSTR_TO_DATE(RIGHT(financial_year,10),'%d-%m-%Y')\n
\n
soup wrap:
Use LEFT() and RIGHT() since the length of your values is fixed, and use STR_TO_DATE() to convert your string to a date. Here is the example:
SELECT financial_year
FROM financial_years
WHERE STR_TO_DATE('03-05-2011','%d-%m-%Y') >= DATE( LEFT(financial_year,10) )
AND STR_TO_DATE('03-05-2011','%d-%m-%Y') <= DATE( RIGHT(financial_year,10) );
If the data type of financial_year is VARCHAR, you should use STR_TO_DATE() there too, like this:
STR_TO_DATE(LEFT(financial_year,10),'%d-%m-%Y')
and
STR_TO_DATE(RIGHT(financial_year,10),'%d-%m-%Y')
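A sketch, not from the answer, of the same LEFT/RIGHT idea in plain Python, assuming the column holds a fixed-width value like "01-04-2011 - 31-03-2012":

```python
from datetime import datetime

def in_financial_year(financial_year, day):
    # LEFT(col, 10) / RIGHT(col, 10) become fixed-width slices;
    # STR_TO_DATE('...', '%d-%m-%Y') becomes strptime with the same format.
    fmt = "%d-%m-%Y"
    start = datetime.strptime(financial_year[:10], fmt)
    end = datetime.strptime(financial_year[-10:], fmt)
    return start <= datetime.strptime(day, fmt) <= end

print(in_financial_year("01-04-2011 - 31-03-2012", "03-05-2011"))  # True
```

The slicing only works because the dates are fixed-width, which is exactly the condition the answer relies on for LEFT()/RIGHT().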
qid & accept id:
(25361410, 25361526)
query:
Drop auto generated constraint name
soup:
Your con_name variable is out of scope within the DDL statement you're executing; you're trying to drop a constraint called con_name, not one named with the value it holds - as you suspected. You can't use a bind variable here, so you'll need to concatenate the name:
\nDECLARE\n con_name all_constraints.constraint_name%type;\nBEGIN\n select constraint_name into con_name\n from all_constraints\n where table_name = 'MY_TABLE' and constraint_type = 'P';\n\n EXECUTE immediate 'ALTER TABLE MY_TABLE drop constraint ' || con_name;\n\n EXECUTE immediate 'ALTER TABLE MY_TABLE ADD CONSTRAINT MT_PK PRIMARY KEY (REV, ID)';\nEND;\n/\n
\nAs Nicholas Krasnov pointed out in a comment, you don't need to do this at all; you can drop the primary key without specifying its name, without using dynamic SQL or a PL/SQL block:
\nALTER TABLE MY_TABLE DROP PRIMARY KEY;\nALTER TABLE MY_TABLE ADD CONSTRAINT MT_PK PRIMARY KEY (REV, ID);\n
\nHopefully you don't already have any tables with foreign key constraints against this PK.
\n
soup wrap:
Your con_name variable is out of scope within the DDL statement you're executing; you're trying to drop a constraint called con_name, not one named with the value it holds - as you suspected. You can't use a bind variable here, so you'll need to concatenate the name:
DECLARE
con_name all_constraints.constraint_name%type;
BEGIN
select constraint_name into con_name
from all_constraints
where table_name = 'MY_TABLE' and constraint_type = 'P';
EXECUTE immediate 'ALTER TABLE MY_TABLE drop constraint ' || con_name;
EXECUTE immediate 'ALTER TABLE MY_TABLE ADD CONSTRAINT MT_PK PRIMARY KEY (REV, ID)';
END;
/
As Nicholas Krasnov pointed out in a comment, you don't need to do this at all; you can drop the primary key without specifying its name, without using dynamic SQL or a PL/SQL block:
ALTER TABLE MY_TABLE DROP PRIMARY KEY;
ALTER TABLE MY_TABLE ADD CONSTRAINT MT_PK PRIMARY KEY (REV, ID);
Hopefully you don't already have any tables with foreign key constraints against this PK.
qid & accept id:
(25380801, 25380913)
query:
Moving or inserting data to other SQL table with format
soup:
The easiest way to do this is with union all:
\nselect col0, col1, col2, col5\nfrom oldtable\nunion all\nselect col0, col1, col3, col4\nfrom oldtable\nwhere col3 is not null;\n
\nIf you want to put this into a new table, use either insert or select into. For instance:
\nselect col0, col1, col3, col4\ninto newtable\nfrom (select col0, col1, col2 as col3, col5 as col4\n from oldtable\n union all\n select col0, col1, col3, col4\n from oldtable\n where col3 is not null\n ) t\n
\n
soup wrap:
The easiest way to do this is with union all:
select col0, col1, col2, col5
from oldtable
union all
select col0, col1, col3, col4
from oldtable
where col3 is not null;
If you want to put this into a new table, use either insert or select into. For instance:
select col0, col1, col3, col4
into newtable
from (select col0, col1, col2 as col3, col5 as col4
from oldtable
union all
select col0, col1, col3, col4
from oldtable
where col3 is not null
) t
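A sketch of the reshape, run against an in-memory SQLite table with made-up data (SQLite lacks SELECT ... INTO, so CREATE TABLE ... AS SELECT stands in for it):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table oldtable (col0, col1, col2, col3, col4, col5)")
con.execute("insert into oldtable values (1, 'a', 'x', 'y', 10, 20)")
con.execute("insert into oldtable values (2, 'b', 'x', NULL, 10, 20)")

# Each source row yields one row from the first branch, plus a second row
# from the second branch only when col3 is populated.
con.execute("""
    create table newtable as
    select col0, col1, col2 as col3, col5 as col4 from oldtable
    union all
    select col0, col1, col3, col4 from oldtable where col3 is not null
""")
rows = con.execute("select * from newtable order by col0").fetchall()
print(rows)
```

With the sample rows above, row 1 (non-NULL col3) contributes two output rows and row 2 contributes one, which is the "one row becomes up to two" behaviour the UNION ALL provides.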
qid & accept id:
(25390857, 25391039)
query:
Replace partial value inside row
soup:
The easiest way to do this is to convert your existing URLs to something else, run the original query, and then revert them all back again.
\nThis query will replace all instances of url.com/images with [PLACEHOLDER].
\nUPDATE wp_posts\nSET post_content = REPLACE(post_content,'url.com/images','[PLACEHOLDER]')\nWHERE post_content LIKE '%url.com/images%';\n
\nNow run your original query to append /images to the url.com:
\nUPDATE wp_posts\nSET post_content = REPLACE(post_content,'url.com','url.com/images')\nWHERE post_content LIKE '%url.com%';\n
\nAnd now you're free to move the [PLACEHOLDER] back:
\nUPDATE wp_posts\nSET post_content = REPLACE(post_content,'[PLACEHOLDER]','url.com/images')\nWHERE post_content LIKE '%[PLACEHOLDER]%';\n
\nAll in one lump, for copy & paste ease:
\nUPDATE wp_posts\nSET post_content = REPLACE(post_content,'url.com/images','[PLACEHOLDER]')\nWHERE post_content LIKE '%url.com/images%';\nUPDATE wp_posts\nSET post_content = REPLACE(post_content,'url.com','url.com/images')\nWHERE post_content LIKE '%url.com%';\nUPDATE wp_posts\nSET post_content = REPLACE(post_content,'[PLACEHOLDER]','url.com/images')\nWHERE post_content LIKE '%[PLACEHOLDER]%';\n
\n
soup wrap:
The easiest way to do this is to convert your existing URLs to something else, run the original query, and then revert them all back again.
This query will replace all instances of url.com/images with [PLACEHOLDER].
UPDATE wp_posts
SET post_content = REPLACE(post_content,'url.com/images','[PLACEHOLDER]')
WHERE post_content LIKE '%url.com/images%';
Now run your original query to append /images to the url.com:
UPDATE wp_posts
SET post_content = REPLACE(post_content,'url.com','url.com/images')
WHERE post_content LIKE '%url.com%';
And now you're free to move the [PLACEHOLDER] back:
UPDATE wp_posts
SET post_content = REPLACE(post_content,'[PLACEHOLDER]','url.com/images')
WHERE post_content LIKE '%[PLACEHOLDER]%';
All in one lump, for copy & paste ease:
UPDATE wp_posts
SET post_content = REPLACE(post_content,'url.com/images','[PLACEHOLDER]')
WHERE post_content LIKE '%url.com/images%';
UPDATE wp_posts
SET post_content = REPLACE(post_content,'url.com','url.com/images')
WHERE post_content LIKE '%url.com%';
UPDATE wp_posts
SET post_content = REPLACE(post_content,'[PLACEHOLDER]','url.com/images')
WHERE post_content LIKE '%[PLACEHOLDER]%';
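A small sketch (plain Python strings, hypothetical content) of why the placeholder step matters: a single naive replace would also rewrite URLs that already contain /images.

```python
content = 'see http://url.com/a.png and http://url.com/images/b.png'

# Step 1: protect URLs that already have /images behind a placeholder.
step1 = content.replace('url.com/images', '[PLACEHOLDER]')
# Step 2: the original replace now only touches bare url.com occurrences.
step2 = step1.replace('url.com', 'url.com/images')
# Step 3: restore the protected URLs unchanged.
step3 = step2.replace('[PLACEHOLDER]', 'url.com/images')

print(step3)
```

The bare url.com gains /images while the URL that already had it comes through untouched; skipping the placeholder steps would produce url.com/images/images for the second URL.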
qid & accept id:
(25420950, 25421574)
query:
How do I combine 2 records with a single field into 1 row with 2 fields (Oracle 11g)?
soup:
You need to use pivot:
\nwith t(id, d) as (\n select 1, 'field1 = test2' from dual union all\n select 2, 'field1 = test3' from dual \n)\nselect *\n from t\npivot (max (d) for id in (1, 2))\n
\nIf you don't have the id field you can generate it, but you will have XML type:
\nwith t(d) as (\n select 'field1 = test2' from dual union all\n select 'field1 = test3' from dual \n), t1(id, d) as (\n select ROW_NUMBER() OVER(ORDER BY d), d from t\n)\nselect *\n from t1\npivot xml (max (d) for id in (select id from t1))\n
\n
soup wrap:
You need to use pivot:
with t(id, d) as (
select 1, 'field1 = test2' from dual union all
select 2, 'field1 = test3' from dual
)
select *
from t
pivot (max (d) for id in (1, 2))
If you don't have the id field you can generate it, but you will have XML type:
with t(d) as (
select 'field1 = test2' from dual union all
select 'field1 = test3' from dual
), t1(id, d) as (
select ROW_NUMBER() OVER(ORDER BY d), d from t
)
select *
from t1
pivot xml (max (d) for id in (select id from t1))
qid & accept id:
(25428684, 25428786)
query:
MySQL Select from three tables
soup:
something like this?
\nQUERY:
\nSELECT country, profession, MAX(money) AS money \nFROM\n( SELECT u.country, g.profession, SUM(um.money) AS money\n FROM user_money um\n JOIN users u ON u.id = um.user_id\n JOIN groups g ON g.id = um.group_id\n GROUP BY g.profession, u.country\n ORDER BY um.money DESC\n) t\nGROUP BY country\nORDER BY money DESC\n
\n\nOUTPUT:
\n+---------------+------------+-------+\n| country | profession | money |\n+---------------+------------+-------+\n| Luxembourg | Hacker | 200 |\n| Albania | Hacker | 120 |\n| United States | Boss | 55 |\n+---------------+------------+-------+\n
\n
soup wrap:
something like this?
QUERY:
SELECT country, profession, MAX(money) AS money
FROM
( SELECT u.country, g.profession, SUM(um.money) AS money
FROM user_money um
JOIN users u ON u.id = um.user_id
JOIN groups g ON g.id = um.group_id
GROUP BY g.profession, u.country
ORDER BY um.money DESC
) t
GROUP BY country
ORDER BY money DESC
OUTPUT:
+---------------+------------+-------+
| country | profession | money |
+---------------+------------+-------+
| Luxembourg | Hacker | 200 |
| Albania | Hacker | 120 |
| United States | Boss | 55 |
+---------------+------------+-------+
qid & accept id:
(25472241, 25472731)
query:
mysql most popular articles in most popular categories
soup:
To do this in MySQL you have to mimic the row_number() over (partition by category) functionality that would otherwise be available in other databases.
\nI've tested out the query below using some sample data here:
\nFiddle:
\nhttp://sqlfiddle.com/#!9/2b8d9/1/0
\nQuery:
\nselect id, category_id\nfrom(\nselect x.*,\n @row_number:=case when @category_id=x.category_id then @row_number+1 else 1 end as row_number,\n @category_id:=x.category_id as grp\n from (select art.id, art.category_id, count(*) as num_art_views\n from articles art\n join (select art.category_id, count(*)\n from view_counts cnt\n join articles art\n on cnt.article_id = art.id\n group by art.category_id\n order by 2 desc limit 5) topcats\n on art.category_id = topcats.category_id\n join view_counts cnt\n on art.id = cnt.article_id\n group by art.id, art.category_id\n order by art.category_id, num_art_views desc) x\n cross join (select @row_number := 0, @category_id := '') as r\n) x where row_number <= 5\n
\nFor some clarification, this will show the top 5 articles within the top 5 categories.
\nUsing LIMIT was sufficient to get the top 5 categories, but to get the top 5 articles WITHIN each category, you have to mimic the PARTITION BY of other databases by using a variable that restarts at each change in category.
\nIt might help to understand if you run just the inner portion; see the fiddle here:\nhttp://sqlfiddle.com/#!9/2b8d9/2/0
\nThe output at that point is:
\n| ID | CATEGORY_ID | NUM_ART_VIEWS | ROW_NUMBER | GRP |\n|-----------|-------------|---------------|------------|--------|\n| article16 | autos | 2 | 1 | autos |\n| article14 | planes | 2 | 1 | planes |\n| article12 | sport | 4 | 1 | sport |\n| article3 | sport | 3 | 2 | sport |\n| article4 | sport | 3 | 3 | sport |\n| article1 | sport | 3 | 4 | sport |\n| article2 | sport | 3 | 5 | sport |\n| article5 | sport | 2 | 6 | sport |\n| article15 | trains | 2 | 1 | trains |\n| article13 | tv | 6 | 1 | tv |\n| article9 | tv | 3 | 2 | tv |\n| article6 | tv | 3 | 3 | tv |\n| article7 | tv | 3 | 4 | tv |\n| article8 | tv | 3 | 5 | tv |\n| article10 | tv | 2 | 6 | tv |\n
\nYou can easily exclude anything not <= 5 at that point (which is what the above query does).
\n
soup wrap:
To do this in MySQL you have to mimic the row_number() over (partition by category) functionality that would otherwise be available in other databases.
I've tested out the query below using some sample data here:
Fiddle:
http://sqlfiddle.com/#!9/2b8d9/1/0
Query:
select id, category_id
from(
select x.*,
@row_number:=case when @category_id=x.category_id then @row_number+1 else 1 end as row_number,
@category_id:=x.category_id as grp
from (select art.id, art.category_id, count(*) as num_art_views
from articles art
join (select art.category_id, count(*)
from view_counts cnt
join articles art
on cnt.article_id = art.id
group by art.category_id
order by 2 desc limit 5) topcats
on art.category_id = topcats.category_id
join view_counts cnt
on art.id = cnt.article_id
group by art.id, art.category_id
order by art.category_id, num_art_views desc) x
cross join (select @row_number := 0, @category_id := '') as r
) x where row_number <= 5
For some clarification, this will show the top 5 articles within the top 5 categories.
Using LIMIT was sufficient to get the top 5 categories, but to get the top 5 articles WITHIN each category, you have to mimic the PARTITION BY of other databases by using a variable that restarts at each change in category.
It might help to understand if you run just the inner portion; see the fiddle here:
http://sqlfiddle.com/#!9/2b8d9/2/0
The output at that point is:
| ID | CATEGORY_ID | NUM_ART_VIEWS | ROW_NUMBER | GRP |
|-----------|-------------|---------------|------------|--------|
| article16 | autos | 2 | 1 | autos |
| article14 | planes | 2 | 1 | planes |
| article12 | sport | 4 | 1 | sport |
| article3 | sport | 3 | 2 | sport |
| article4 | sport | 3 | 3 | sport |
| article1 | sport | 3 | 4 | sport |
| article2 | sport | 3 | 5 | sport |
| article5 | sport | 2 | 6 | sport |
| article15 | trains | 2 | 1 | trains |
| article13 | tv | 6 | 1 | tv |
| article9 | tv | 3 | 2 | tv |
| article6 | tv | 3 | 3 | tv |
| article7 | tv | 3 | 4 | tv |
| article8 | tv | 3 | 5 | tv |
| article10 | tv | 2 | 6 | tv |
You can easily exclude anything not <= 5 at that point (which is what the above query does).
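An editorial aside, not part of the answer: on MySQL 8.0+ or SQLite 3.25+ the user-variable trick can be replaced with a real ROW_NUMBER() window function. A sketch against an in-memory SQLite table with made-up articles and view counts:

```python
import sqlite3  # bundled SQLite must be 3.25+ for window functions

con = sqlite3.connect(":memory:")
con.execute("create table views (article, category, n)")
con.executemany("insert into views values (?, ?, ?)", [
    ("a1", "tv", 6), ("a2", "tv", 3), ("a3", "tv", 1),
    ("b1", "sport", 4), ("b2", "sport", 2),
])

# ROW_NUMBER() restarts at 1 for each category - exactly what the
# @row_number/@category_id variable pair emulates in the answer.
top2 = con.execute("""
    select article, category from (
        select article, category,
               row_number() over (partition by category order by n desc) as rn
        from views
    ) where rn <= 2
    order by category, rn
""").fetchall()
print(top2)
```

The variable-based version in the answer remains the way to do this on MySQL 5.x, where window functions are unavailable.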
qid & accept id:
(25522070, 25522607)
query:
Creating a table from a Comma Separated List in Oracle (> 11g) - Input string limit 4000 chars
soup:
\nCan I make the function work with input strings greater than 4000 characters?\nYes, you can use for example CLOB
\nIs there a more effective way of achieving the same result?\nI saw in the comments of the blog a good answer, which is about a recursive solution.
\n
\nJust make some datatype changes to get it to work, e.g.:
\n\nchange the varchar2_table type to CLOB
\nTYPE varchar2_table IS TABLE OF CLOB INDEX BY BINARY_INTEGER;\n
\nchange the VARCHAR2 datatype to CLOB in all p_delimstring occurrences
\nchange original SUBSTR functions to DBMS_LOB.SUBSTR\n(if you need more info about that: http://docs.oracle.com/cd/A91202_01/901_doc/appdev.901/a89852/dbms_23b.htm)
\nCREATE OR REPLACE PACKAGE parse AS\n /*\n || Package of utility procedures for parsing delimited or fixed position strings into tables\n || of individual values, and vice versa.\n */\n TYPE varchar2_table IS TABLE OF CLOB INDEX BY BINARY_INTEGER;\n PROCEDURE delimstring_to_table\n ( p_delimstring IN CLOB\n , p_table OUT varchar2_table\n , p_nfields OUT INTEGER\n , p_delim IN VARCHAR2 DEFAULT ','\n );\n PROCEDURE table_to_delimstring\n ( p_table IN varchar2_table\n , p_delimstring OUT CLOB\n , p_delim IN VARCHAR2 DEFAULT ','\n );\nEND parse;\n/\nCREATE OR REPLACE PACKAGE BODY parse AS\n PROCEDURE delimstring_to_table\n ( p_delimstring IN CLOB\n , p_table OUT varchar2_table\n , p_nfields OUT INTEGER\n , p_delim IN VARCHAR2 DEFAULT ','\n )\n IS\n v_string CLOB := p_delimstring;\n v_nfields PLS_INTEGER := 1;\n v_table varchar2_table;\n v_delimpos PLS_INTEGER := INSTR(p_delimstring, p_delim);\n v_delimlen PLS_INTEGER := LENGTH(p_delim);\n BEGIN\n WHILE v_delimpos > 0\n LOOP\n v_table(v_nfields) := DBMS_LOB.SUBSTR(v_string,1,v_delimpos-1);\n v_string := DBMS_LOB.SUBSTR(v_string,v_delimpos+v_delimlen);\n v_nfields := v_nfields+1;\n v_delimpos := INSTR(v_string, p_delim);\n END LOOP;\n v_table(v_nfields) := v_string;\n p_table := v_table;\n p_nfields := v_nfields;\n END delimstring_to_table;\n PROCEDURE table_to_delimstring\n ( p_table IN varchar2_table\n , p_delimstring OUT CLOB\n , p_delim IN VARCHAR2 DEFAULT ','\n )\n IS\n v_nfields PLS_INTEGER := p_table.COUNT;\n v_string CLOB;\n BEGIN\n FOR i IN 1..v_nfields\n LOOP\n v_string := v_string || p_table(i);\n IF i != v_nfields THEN\n v_string := v_string || p_delim;\n END IF;\n END LOOP;\n p_delimstring := v_string;\n END table_to_delimstring;\nEND parse;\n/\n
\n
\n
soup wrap:
Can I make the function work with input strings greater than 4000 characters?
Yes, you can use a CLOB, for example.
Is there a more effective way of achieving the same result?
I saw a good answer in the comments of the blog, describing a recursive solution.
Just make some datatype changes to get it to work, e.g.:
change the varchar2_table type to CLOB
TYPE varchar2_table IS TABLE OF CLOB INDEX BY BINARY_INTEGER;
change the VARCHAR2 datatype to CLOB in all p_delimstring occurrences
change the original SUBSTR calls to DBMS_LOB.SUBSTR, noting that its argument order is (lob, amount, offset) rather than SUBSTR's (string, position, length)
(if you need more info about that: http://docs.oracle.com/cd/A91202_01/901_doc/appdev.901/a89852/dbms_23b.htm)
CREATE OR REPLACE PACKAGE parse AS
/*
|| Package of utility procedures for parsing delimited or fixed position strings into tables
|| of individual values, and vice versa.
*/
TYPE varchar2_table IS TABLE OF CLOB INDEX BY BINARY_INTEGER;
PROCEDURE delimstring_to_table
( p_delimstring IN CLOB
, p_table OUT varchar2_table
, p_nfields OUT INTEGER
, p_delim IN VARCHAR2 DEFAULT ','
);
PROCEDURE table_to_delimstring
( p_table IN varchar2_table
, p_delimstring OUT CLOB
, p_delim IN VARCHAR2 DEFAULT ','
);
END parse;
/
CREATE OR REPLACE PACKAGE BODY parse AS
PROCEDURE delimstring_to_table
( p_delimstring IN CLOB
, p_table OUT varchar2_table
, p_nfields OUT INTEGER
, p_delim IN VARCHAR2 DEFAULT ','
)
IS
v_string CLOB := p_delimstring;
v_nfields PLS_INTEGER := 1;
v_table varchar2_table;
v_delimpos PLS_INTEGER := INSTR(p_delimstring, p_delim);
v_delimlen PLS_INTEGER := LENGTH(p_delim);
BEGIN
WHILE v_delimpos > 0
LOOP
-- DBMS_LOB.SUBSTR takes (lob, amount, offset), not (string, position, length)
v_table(v_nfields) := DBMS_LOB.SUBSTR(v_string, v_delimpos-1, 1);
v_string := DBMS_LOB.SUBSTR(v_string, DBMS_LOB.GETLENGTH(v_string) - v_delimpos - v_delimlen + 1, v_delimpos + v_delimlen);
v_nfields := v_nfields+1;
v_delimpos := INSTR(v_string, p_delim);
END LOOP;
v_table(v_nfields) := v_string;
p_table := v_table;
p_nfields := v_nfields;
END delimstring_to_table;
PROCEDURE table_to_delimstring
( p_table IN varchar2_table
, p_delimstring OUT CLOB
, p_delim IN VARCHAR2 DEFAULT ','
)
IS
v_nfields PLS_INTEGER := p_table.COUNT;
v_string CLOB;
BEGIN
FOR i IN 1..v_nfields
LOOP
v_string := v_string || p_table(i);
IF i != v_nfields THEN
v_string := v_string || p_delim;
END IF;
END LOOP;
p_delimstring := v_string;
END table_to_delimstring;
END parse;
/
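For reference (not part of the answer), the same two operations in plain Python, where strings have no 4000-character limit; this is just to make the intended behaviour of the two procedures concrete:

```python
def delimstring_to_table(delimstring, delim=","):
    # Equivalent of parse.delimstring_to_table: split on the delimiter,
    # preserving empty fields, as the PL/SQL loop does.
    return delimstring.split(delim)

def table_to_delimstring(table, delim=","):
    # Equivalent of parse.table_to_delimstring: rejoin with the delimiter,
    # with no trailing separator after the last field.
    return delim.join(table)

fields = delimstring_to_table("a,b,,c")
print(fields)                        # empty field between b and c is kept
print(table_to_delimstring(fields))  # round-trips to the original string
```
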
qid & accept id:
(25531666, 25531725)
query:
Using WHERE and ORDER BY together in Oracle 10g
soup:
remove the AND
\nselect * from employees \nWHERE job_id NOT like '%CLERK' \norder by last_name\n
\nBased on comments, with pseudo code
\nselect * from employees \nWHERE job_id != 'CLERKS'\nAND DateAppointedFielName BETWEEN StartDate AND EndDate \norder by last_name\n
\n
soup wrap:
remove the AND
select * from employees
WHERE job_id NOT like '%CLERK'
order by last_name
Based on comments, with pseudo code
select * from employees
WHERE job_id != 'CLERKS'
AND DateAppointedFielName BETWEEN StartDate AND EndDate
order by last_name
qid & accept id:
(25537492, 25537540)
query:
MySQL - Always return exactly n records
soup:
You can do this using union all and limit:
\n(SELECT Diameter\n FROM `TreeDiameters` \n WHERE TreeID = ?\n) union all\n(select NULL as Diameter\n from (select 1 as n union all select 2 union all select 3 union all select 4 union all\n select 5 union all select 6\n ) n \n)\nORDER BY Diameter DESC\nLIMIT 0, 6;\n
\nMySQL puts NULL values last with a descending sort. But you can also be specific:
\nORDER BY (Diameter is not null) DESC, Diameter DESC\n
\n
soup wrap:
You can do this using union all and limit:
(SELECT Diameter
FROM `TreeDiameters`
WHERE TreeID = ?
) union all
(select NULL as Diameter
from (select 1 as n union all select 2 union all select 3 union all select 4 union all
select 5 union all select 6
) n
)
ORDER BY Diameter DESC
LIMIT 0, 6;
MySQL puts NULL values last with a descending sort. But you can also be specific:
ORDER BY (Diameter is not null) DESC, Diameter DESC
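A sketch of the padding idea against an in-memory SQLite table with made-up data; the six filler SELECTs guarantee at least six rows exist before LIMIT trims the union back to exactly six:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table TreeDiameters (TreeID, Diameter)")
con.executemany("insert into TreeDiameters values (1, ?)", [(30,), (25,)])

# Real readings first (explicit not-null sort key), NULL fillers after.
rows = con.execute("""
    select Diameter from (
        select Diameter from TreeDiameters where TreeID = 1
        union all
        select NULL as Diameter from (select 1 union all select 2
            union all select 3 union all select 4
            union all select 5 union all select 6)
    )
    order by (Diameter is not null) desc, Diameter desc
    limit 6
""").fetchall()
print(rows)
```

With only two real readings, the result is the two diameters followed by four NULLs, always six rows in total.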
qid & accept id:
(25538698, 25539049)
query:
Batch SQL Server Results by Max Number of Rows
soup:
First use row_number partitioned by personid to get a ranking for each row that resets back to 1 whenever a new personid is encountered. Then you can divide that by 3 (or whatever number you want for batch size) and use a floor function to flatten the resulting numbers into integers. You now have a batch ID for each row, but it still resets back to 1 when it reaches a new personID, so you're not done. You can then do a dense_rank() that ranks by personid plus our new "batchid_person_specific" column and get a global batchid for all rows.
\nSql Fiddle here: http://sqlfiddle.com/#!6/3c75d/18
\nThe result looks like this:
\nwith qwry as (\nSELECT \nROW_NUMBER() OVER (PARTITION BY PersonId order by TeamPersonId) as rownum_nofloor\n, floor((ROW_NUMBER() OVER (PARTITION BY PersonId order by TeamPersonId)-1)/3)+1 as batchid_person_specific\n, *\nFROM TeamPersonMap \n )\nselect \nDENSE_RANK() OVER (ORDER BY PersonId, batchid_person_specific) as BatchGroupId_Final\n,* from qwry\nORDER BY PersonId\n
\nResults:
\n| BATCHGROUPID_FINAL | ROWNUM_NOFLOOR | BATCHID_PERSON_SPECIFIC | TEAMPERSONID | TEAMID | PERSONID |\n|--------------------|----------------|-------------------------|--------------|--------|----------|\n| 1 | 1 | 1 | 1 | 1 | 101 |\n| 1 | 2 | 1 | 6 | 2 | 101 |\n| 1 | 3 | 1 | 11 | 3 | 101 |\n| 2 | 4 | 2 | 16 | 4 | 101 |\n| 2 | 5 | 2 | 21 | 5 | 101 |\n| 3 | 1 | 1 | 2 | 1 | 102 |\n| 3 | 2 | 1 | 7 | 2 | 102 |\n| 3 | 3 | 1 | 12 | 3 | 102 |\n| 4 | 4 | 2 | 17 | 4 | 102 |\n| 4 | 5 | 2 | 22 | 5 | 102 |\n| 5 | 1 | 1 | 3 | 1 | 103 |\n| 5 | 2 | 1 | 8 | 2 | 103 |\n| 5 | 3 | 1 | 13 | 3 | 103 |\n| 6 | 4 | 2 | 18 | 4 | 103 |\n| 6 | 5 | 2 | 23 | 5 | 103 |\n| 7 | 1 | 1 | 4 | 1 | 104 |\n| 7 | 2 | 1 | 9 | 2 | 104 |\n| 7 | 3 | 1 | 14 | 3 | 104 |\n| 8 | 4 | 2 | 19 | 4 | 104 |\n| 8 | 5 | 2 | 24 | 5 | 104 |\n| 9 | 1 | 1 | 5 | 1 | 105 |\n| 9 | 2 | 1 | 10 | 2 | 105 |\n| 9 | 3 | 1 | 15 | 3 | 105 |\n| 10 | 4 | 2 | 20 | 4 | 105 |\n| 10 | 5 | 2 | 25 | 5 | 105 |\n
\n
soup wrap:
First use row_number partitioned by personid to get a ranking for each row that resets back to 1 whenever a new personid is encountered. Then you can divide that by 3 (or whatever number you want for batch size) and use a floor function to flatten the resulting numbers into integers. You now have a batch ID for each row, but it still resets back to 1 when it reaches a new personID, so you're not done. You can then do a dense_rank() that ranks by personid plus our new "batchid_person_specific" column and get a global batchid for all rows.
Sql Fiddle here: http://sqlfiddle.com/#!6/3c75d/18
The result looks like this:
with qwry as (
SELECT
ROW_NUMBER() OVER (PARTITION BY PersonId order by TeamPersonId) as rownum_nofloor
, floor((ROW_NUMBER() OVER (PARTITION BY PersonId order by TeamPersonId)-1)/3)+1 as batchid_person_specific
, *
FROM TeamPersonMap
)
select
DENSE_RANK() OVER (ORDER BY PersonId, batchid_person_specific) as BatchGroupId_Final
,* from qwry
ORDER BY PersonId
Results:
| BATCHGROUPID_FINAL | ROWNUM_NOFLOOR | BATCHID_PERSON_SPECIFIC | TEAMPERSONID | TEAMID | PERSONID |
|--------------------|----------------|-------------------------|--------------|--------|----------|
| 1 | 1 | 1 | 1 | 1 | 101 |
| 1 | 2 | 1 | 6 | 2 | 101 |
| 1 | 3 | 1 | 11 | 3 | 101 |
| 2 | 4 | 2 | 16 | 4 | 101 |
| 2 | 5 | 2 | 21 | 5 | 101 |
| 3 | 1 | 1 | 2 | 1 | 102 |
| 3 | 2 | 1 | 7 | 2 | 102 |
| 3 | 3 | 1 | 12 | 3 | 102 |
| 4 | 4 | 2 | 17 | 4 | 102 |
| 4 | 5 | 2 | 22 | 5 | 102 |
| 5 | 1 | 1 | 3 | 1 | 103 |
| 5 | 2 | 1 | 8 | 2 | 103 |
| 5 | 3 | 1 | 13 | 3 | 103 |
| 6 | 4 | 2 | 18 | 4 | 103 |
| 6 | 5 | 2 | 23 | 5 | 103 |
| 7 | 1 | 1 | 4 | 1 | 104 |
| 7 | 2 | 1 | 9 | 2 | 104 |
| 7 | 3 | 1 | 14 | 3 | 104 |
| 8 | 4 | 2 | 19 | 4 | 104 |
| 8 | 5 | 2 | 24 | 5 | 104 |
| 9 | 1 | 1 | 5 | 1 | 105 |
| 9 | 2 | 1 | 10 | 2 | 105 |
| 9 | 3 | 1 | 15 | 3 | 105 |
| 10 | 4 | 2 | 20 | 4 | 105 |
| 10 | 5 | 2 | 25 | 5 | 105 |
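The same two-step numbering can be sketched in plain Python (made-up rows, batch size 3): a per-person row number floor-divided into a local batch id, then a global id that increments over (person, local batch) pairs, mirroring the ROW_NUMBER-then-DENSE_RANK construction:

```python
from itertools import groupby

def batch_ids(rows, size=3):
    # rows: (person_id, team_person_id) pairs, pre-sorted by person then team
    out, global_id = [], 0
    for person, grp in groupby(rows, key=lambda r: r[0]):
        last_local = None
        for i, row in enumerate(grp):
            local = i // size            # floor((row_number - 1) / size)
            if local != last_local:      # dense_rank over (person, local)
                global_id += 1
                last_local = local
            out.append((global_id,) + row)
    return out

rows = [(101, t) for t in (1, 6, 11, 16, 21)] + [(102, t) for t in (2, 7, 12)]
result = batch_ids(rows)
print(result)
```

Person 101's five rows split into global batches 1 and 2, and person 102's rows start a fresh batch 3, matching the BatchGroupId_Final column in the output above.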
qid & accept id:
(25543723, 25544510)
query:
How to efficiently define daily constant?
soup:
You can use environment variables too.
\nWhen you retrieve your "constants" you set it in environment:
\nimport os\nos.environ['MY_DAILY_CONST_1'] = 'dailyconst1'\nos.environ['MY_DAILY_CONST_2'] = 'dailyconst2'\n...\n
\nAnd when you have to access it:
\nimport os\nmyconst1 = os.environ['MY_DAILY_CONST_1']\n...\n
\n
soup wrap:
You can use environment variables too.
When you retrieve your "constants", you set them in the environment:
import os
os.environ['MY_DAILY_CONST_1'] = 'dailyconst1'
os.environ['MY_DAILY_CONST_2'] = 'dailyconst2'
...
And when you need to access them:
import os
myconst1 = os.environ['MY_DAILY_CONST_1']
...
qid & accept id:
(25547827, 25548615)
query:
Query to find foreign keys on database schema
soup:
You may use INFORMATION_SCHEMA for this:
\nSELECT \n * \nFROM \n INFORMATION_SCHEMA.TABLE_CONSTRAINTS \nWHERE \n CONSTRAINT_TYPE='FOREIGN KEY'\n
\nPossible types of constraint may be:
\n\nPRIMARY KEY for primary keys \nFOREIGN KEY for foreign keys \nUNIQUE for unique constraints \n
\nSo you're interested in FOREIGN KEY type. This will show you which table on which column has the constraint, but won't show you targeted constraint column and table. To find them, you need to use another table, INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS which has such information, so, basically, to reconstruct relation between tables, you'll need:
\nSELECT \n t.TABLE_SCHEMA, \n t.TABLE_NAME, \n r.REFERENCED_TABLE_NAME \nFROM \n INFORMATION_SCHEMA.TABLE_CONSTRAINTS AS t \n JOIN INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS AS r \n ON t.CONSTRAINT_NAME=r.CONSTRAINT_NAME \nWHERE \n t.CONSTRAINT_TYPE='FOREIGN KEY'\n
\nBut that's, again, is missing columns (because it doesn't belongs to those tables) and will show only relations via FK between tables. To reconstruct full relation (i.e. with columns involved) you'll need to refer to KEY_COLUMN_USAGE table:
\nSELECT \n TABLE_SCHEMA, \n TABLE_NAME, \n COLUMN_NAME, \n REFERENCED_TABLE_SCHEMA, \n REFERENCED_TABLE_NAME, \n REFERENCED_COLUMN_NAME \nFROM \n INFORMATION_SCHEMA.KEY_COLUMN_USAGE \nWHERE \n REFERENCED_TABLE_SCHEMA IS NOT NULL\n
\nThis query will show all relations where referenced entity is not null, and, since it's applicable only in FK case - it's an answer to the question of finding FK relations. It's quite universal, but I've provided methods above since it may be useful to get info about PK or unique constraints too.
\n
soup wrap:
You may use INFORMATION_SCHEMA for this:
SELECT
*
FROM
INFORMATION_SCHEMA.TABLE_CONSTRAINTS
WHERE
CONSTRAINT_TYPE='FOREIGN KEY'
The possible constraint types are:
PRIMARY KEY for primary keys
FOREIGN KEY for foreign keys
UNIQUE for unique constraints
So you're interested in the FOREIGN KEY type. This will show you which table carries the constraint, but not the table and column it targets. To find those, you need another view, INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS, which has that information; so, to reconstruct the relation between tables, you'll need:
SELECT
t.TABLE_SCHEMA,
t.TABLE_NAME,
r.REFERENCED_TABLE_NAME
FROM
INFORMATION_SCHEMA.TABLE_CONSTRAINTS AS t
JOIN INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS AS r
ON t.CONSTRAINT_NAME=r.CONSTRAINT_NAME
WHERE
t.CONSTRAINT_TYPE='FOREIGN KEY'
But that, again, is missing the columns (because they don't belong to those tables) and will only show FK relations between tables. To reconstruct the full relation (i.e. with the columns involved) you'll need to refer to the KEY_COLUMN_USAGE table:
SELECT
TABLE_SCHEMA,
TABLE_NAME,
COLUMN_NAME,
REFERENCED_TABLE_SCHEMA,
REFERENCED_TABLE_NAME,
REFERENCED_COLUMN_NAME
FROM
INFORMATION_SCHEMA.KEY_COLUMN_USAGE
WHERE
REFERENCED_TABLE_SCHEMA IS NOT NULL
This query shows all relations where the referenced entity is not null; since that only happens for foreign keys, it answers the question of finding FK relations. It's quite universal, but I've provided the methods above since it may also be useful to get info about PK or unique constraints.
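An aside, not from the answer: SQLite has no INFORMATION_SCHEMA; its analogue for listing foreign keys is PRAGMA foreign_key_list. A runnable sketch with a hypothetical two-table schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table parent (id integer primary key)")
con.execute("""create table child (
    id integer primary key,
    parent_id integer references parent(id)
)""")

# Each row describes one FK column pair:
# (id, seq, referenced_table, from_column, to_column, on_update, on_delete, match)
fks = con.execute("pragma foreign_key_list(child)").fetchall()
print(fks)
```

Like KEY_COLUMN_USAGE, this gives the full relation including the column on each side, rather than just the constraint names.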
qid & accept id:
(25560497, 25560525)
query:
Split Mysql Location entry into 2 columns?
soup:
Executing the query doesn't set the columns. To create the columns do:
\nalter table users_profiles add column latitude decimal(10, 4);\nalter table users_profiles add column longitude decimal(10, 4);\n
\nTo assign them use update:
\nupdate users_profiles\n set latitude = cast(SUBSTRING_INDEX(`location`, ',', 1) as decimal(10, 4)),\n longitude = cast(SUBSTRING_INDEX(location, ',', -1) as decimal(10, 4));\n
\nThe cast() operations are, strictly speaking, unnecessary. I like to be explicit about casts between strings and other types, in case something unusual happens in code. It can be hard to spot problems with implicit casts.
\n
soup wrap:
Executing the query doesn't set the columns. To create the columns do:
alter table users_profiles add column latitude decimal(10, 4);
alter table users_profiles add column longitude decimal(10, 4);
To assign them use update:
update users_profiles
set latitude = cast(SUBSTRING_INDEX(`location`, ',', 1) as decimal(10, 4)),
longitude = cast(SUBSTRING_INDEX(location, ',', -1) as decimal(10, 4));
The cast() operations are, strictly speaking, unnecessary. I like to be explicit about casts between strings and other types, in case something unusual happens in code. It can be hard to spot problems with implicit casts.
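For clarity (not from the answer), what the two SUBSTRING_INDEX calls compute, shown on a plain Python string with a hypothetical "lat,lon" value:

```python
location = "51.5074,-0.1278"

# SUBSTRING_INDEX(location, ',', 1)  -> everything before the first comma
latitude = float(location.split(",")[0])
# SUBSTRING_INDEX(location, ',', -1) -> everything after the last comma
longitude = float(location.split(",")[-1])

print(latitude, longitude)
```

The float() calls play the role of the explicit cast() to decimal in the UPDATE above.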
qid & accept id:
(25570210, 25601074)
query:
Identify Duplicate Xml Nodes
soup:
So, I managed to figure out what I needed to do. It's a little clunky though.
\nFirst, you need to wrap the Xml Select statement in another select against the Unit table, in order to ensure that we end up with xml representing only that unit.
\nSelect\nId,\n(\n Select\n Action, \n TriggerType,\n IU.TypeId,\n IU.Message,\n (\n Select C.Value, I.QuestionId, I.Sequence\n From UnitCondition C\n Inner Join Item I on C.ItemId = I.Id\n Where C.UnitId = IU.Id\n Order by C.Value, I.QuestionId, I.Sequence\n For XML RAW('Condition'), TYPE\n ) as Conditions\n from UnitType T\n Inner Join Unit IU on T.Id = IU.TypeId\n WHERE IU.Id = U.Id\n For XML RAW ('Unit')\n)\nFrom Unit U\n
\nThen, you can wrap this in another select, grouping the xml up by content.
\nSelect content, count(*) as cnt\nFrom\n (\n Select\n Id,\n (\n Select\n Action, \n TriggerType,\n IU.TypeId,\n IU.Message,\n (\n Select C.Value, C.ItemId, I.QuestionId, I.Sequence\n From UnitCondition C\n Inner Join Item I on C.ItemId = I.Id\n Where C.UnitId = IU.Id\n Order by C.Value, I.QuestionId, I.Sequence\n For XML RAW('Condition'), TYPE\n ) as Conditions\n from UnitType T\n Inner Join Unit IU on T.Id = IU.TypeId\n WHERE IU.Id = U.Id\n For XML RAW ('Unit')\n ) as content\n From Unit U\n ) as data\ngroup by content\nhaving count(*) > 1\n
\nThis will allow you to group entire units where the whole content is identical.
\nOne thing to watch out for though, is that to test "uniqueness", you need to guarantee that the data on the inner Xml selection(s) is always the same. To that end, you should apply ordering on the relevant data (i.e. the data in the xml) to ensure consistency. What order you apply doesn't really matter, so long as two identical collections will output in the same order.
\n
soup wrap:
So, I managed to figure out what I needed to do. It's a little clunky though.
First, you need to wrap the Xml Select statement in another select against the Unit table, in order to ensure that we end up with xml representing only that unit.
Select
Id,
(
Select
Action,
TriggerType,
IU.TypeId,
IU.Message,
(
Select C.Value, I.QuestionId, I.Sequence
From UnitCondition C
Inner Join Item I on C.ItemId = I.Id
Where C.UnitId = IU.Id
Order by C.Value, I.QuestionId, I.Sequence
For XML RAW('Condition'), TYPE
) as Conditions
from UnitType T
Inner Join Unit IU on T.Id = IU.TypeId
WHERE IU.Id = U.Id
For XML RAW ('Unit')
)
From Unit U
Then, you can wrap this in another select, grouping the xml up by content.
Select content, count(*) as cnt
From
(
Select
Id,
(
Select
Action,
TriggerType,
IU.TypeId,
IU.Message,
(
Select C.Value, C.ItemId, I.QuestionId, I.Sequence
From UnitCondition C
Inner Join Item I on C.ItemId = I.Id
Where C.UnitId = IU.Id
Order by C.Value, I.QuestionId, I.Sequence
For XML RAW('Condition'), TYPE
) as Conditions
from UnitType T
Inner Join Unit IU on T.Id = IU.TypeId
WHERE IU.Id = U.Id
For XML RAW ('Unit')
) as content
From Unit U
) as data
group by content
having count(*) > 1
This will allow you to group entire units where the whole content is identical.
One thing to watch out for, though: to test "uniqueness" you need to guarantee that the data in the inner XML selection(s) always comes out the same. To that end, apply an ordering to the relevant data (i.e. the data in the XML) to ensure consistency. Which order you apply doesn't really matter, as long as two identical collections output in the same order.
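FOR XML is SQL Server-specific, so here is a sketch of the same idea in Python over SQLite: serialize each unit (its own columns plus its child conditions in a canonical order), then count identical serializations. The mini schema (Unit, UnitCondition) and all data are simplified assumptions for illustration.

```python
import sqlite3
from collections import Counter

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Unit (Id INTEGER PRIMARY KEY, TypeId INT, Message TEXT);
    CREATE TABLE UnitCondition (UnitId INT, Value TEXT);
    INSERT INTO Unit VALUES (1, 7, 'hi'), (2, 7, 'hi'), (3, 8, 'bye');
    -- note: units 1 and 2 store their conditions in different row order
    INSERT INTO UnitCondition VALUES (1,'a'), (1,'b'), (2,'b'), (2,'a'), (3,'a');
""")

units = {}
for uid, type_id, msg in conn.execute("SELECT Id, TypeId, Message FROM Unit"):
    # sorting plays the role of the ORDER BY inside the XML subquery:
    # identical collections must serialize identically
    conds = sorted(v for (v,) in conn.execute(
        "SELECT Value FROM UnitCondition WHERE UnitId = ?", (uid,)))
    units[uid] = (type_id, msg, tuple(conds))

# group whole units whose serialized content is identical (count > 1)
dupes = {content: n for content, n in Counter(units.values()).items() if n > 1}
```

Units 1 and 2 group together only because their condition lists are sorted first, which is exactly the ordering caveat above.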
qid & accept id:
(25579264, 25579522)
query:
MySQL: Get the MIN value of a table from all columns & rows
soup:
soup wrap:
Here's one solution:
SELECT least(MIN(nullif(sgl_ro,0))
,MIN(nullif(sgl_bb,0))
,MIN(nullif(sgl_hb,0))
,MIN(nullif(sgl_fb,0)) ) as min_rate
FROM room_rates
WHERE hotel_id='1'
;
EDIT: Use NULL instead of 'NULL'
'NULL' is a string, and MySQL has very weird ideas about how to cast between types:
select case when 0 = 'NULL'
then 'ohoy'
else 'sailor'
end
from room_rates;
ohoy
ohoy
ohoy
I.e. your solution will work fine once you remove the quotes around NULL:
SELECT
LEAST(
MIN(IF(sgl_ro=0,NULL,sgl_ro))
,MIN(IF(sgl_bb=0,NULL,sgl_bb))
,MIN(IF(sgl_hb=0,NULL,sgl_hb))
,MIN(IF(sgl_fb=0,NULL,sgl_fb))
) AS MinRate
FROM room_rates
WHERE hotel_id='1'
;
MINRATE
9
Edit: Comparison between DBMS:
I tested the following scenario on all DBMSs available in SQLFiddle, plus DB2 10.5:
create table t(x int);
insert into t(x) values (1);
select case when 0 = 'NULL'
then 'ohoy'
else 'sailor'
end
from t;
All MySQL versions returned 'ohoy'
sql.js returned 'sailor'
All others (including DB2 10.5) rejected the query as illegal.
Edit: handle situation where all columns in a row (or all rows for a column) = 0
select min(least(coalesce(nullif(sgl_ro,0), 2147483647)
,coalesce(nullif(sgl_bb,0), 2147483647)
,coalesce(nullif(sgl_hb,0), 2147483647)
,coalesce(nullif(sgl_fb,0), 2147483647) ) )
FROM room_rates
WHERE hotel_id='1'
AND coalesce(nullif(sgl_ro,0), nullif(sgl_bb,0)
,nullif(sgl_hb,0), nullif(sgl_fb,0)) IS NOT NULL;
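The NULLIF/COALESCE pattern is portable; here is a runnable sketch in SQLite, whose scalar min() behaves like MySQL's LEAST (returning NULL if any argument is NULL). The table layout mirrors the room_rates example, with made-up rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE room_rates (hotel_id TEXT, sgl_ro INT, sgl_bb INT, sgl_hb INT, sgl_fb INT);
    INSERT INTO room_rates VALUES ('1', 0, 12, 9, 30), ('1', 0, 0, 0, 0);
""")

# NULLIF turns the 0 placeholders into NULL; COALESCE swaps NULL for a huge
# sentinel so the scalar min() is never poisoned by NULL; the WHERE clause
# drops rows where every column is 0.
row = conn.execute("""
    SELECT min(min(coalesce(nullif(sgl_ro,0), 2147483647)
                  ,coalesce(nullif(sgl_bb,0), 2147483647)
                  ,coalesce(nullif(sgl_hb,0), 2147483647)
                  ,coalesce(nullif(sgl_fb,0), 2147483647)))
    FROM room_rates
    WHERE hotel_id = '1'
      AND coalesce(nullif(sgl_ro,0), nullif(sgl_bb,0),
                   nullif(sgl_hb,0), nullif(sgl_fb,0)) IS NOT NULL
""").fetchone()
```

The inner min() with several arguments is SQLite's scalar form (like LEAST); the outer single-argument min() is the aggregate.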
qid & accept id:
(25585674, 25585719)
query:
Query for updating a table value based on the total of a column found in multiple tables
soup:
soup wrap:
You can use a join in your update query against a UNION ALL subquery:
UPDATE main_trans m
join
(SELECT id,SUM(prc) prc
FROM (
SELECT id,SUM(prc) prc FROM sub_trans_a WHERE id = 'TR01'
union all
SELECT id,SUM(prc) prc FROM sub_trans_b WHERE id = 'TR01'
) t1
) t
on(t.id = m.id)
SET m.tot = t.prc
WHERE m.id = 'TR01'
Also, if sub_trans_a and sub_trans_b have the same structure, why two tables at all? A single table with an extra column for the type ('a' or 'b') would be simpler.
See Demo
Or, if you want to update your whole main_trans table without providing id values, you can do so by adding a GROUP BY to the query:
UPDATE main_trans m
join
(SELECT id,SUM(prc) prc
FROM (
SELECT id,SUM(prc) prc FROM sub_trans_a group by id
union all
SELECT id,SUM(prc) prc FROM sub_trans_b group by id
) t1 group by id
) t
on(t.id = m.id)
SET m.tot = t.prc
See Demo 2
Edit: following a good suggestion by Andomar, you can simplify the inner query to:
UPDATE main_trans m
join
(SELECT id,SUM(prc) prc
FROM (
SELECT id,prc FROM sub_trans_a
union all
SELECT id,prc FROM sub_trans_b
) t1 WHERE id = 'TR01'
) t
on(t.id = m.id)
SET m.tot = t.prc
WHERE m.id = 'TR01'
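UPDATE ... JOIN is MySQL syntax; dialects without it (SQLite here) can express the same thing with a correlated subquery over the UNION ALL. A minimal sketch, with made-up data, following the answer's table names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE main_trans (id TEXT PRIMARY KEY, tot INT);
    CREATE TABLE sub_trans_a (id TEXT, prc INT);
    CREATE TABLE sub_trans_b (id TEXT, prc INT);
    INSERT INTO main_trans VALUES ('TR01', 0), ('TR02', 0);
    INSERT INTO sub_trans_a VALUES ('TR01', 10), ('TR01', 5), ('TR02', 1);
    INSERT INTO sub_trans_b VALUES ('TR01', 7), ('TR02', 2);
""")

# Sum prc across both sub tables for each id and write it into main_trans.
conn.execute("""
    UPDATE main_trans
    SET tot = (SELECT SUM(prc) FROM
                 (SELECT id, prc FROM sub_trans_a
                  UNION ALL
                  SELECT id, prc FROM sub_trans_b) t
               WHERE t.id = main_trans.id)
""")
totals = dict(conn.execute("SELECT id, tot FROM main_trans ORDER BY id"))
```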
qid & accept id:
(25652248, 25652374)
query:
In Oracle SQL, how do I UPDATE columns specified by a priority list?
soup:
soup wrap:
update table1 t1
set roleid = 11
where roleid = 10 and
(case when userid = 1 then 1 when userid = 2 then 2 when userid = 3 then 3 else 4 end) =
(select min(case when userid = 1 then 1 when userid = 2 then 2 when userid = 3 then 3 else 4 end)
from table1
where projectid = t1.projectid);
EDIT:
SQL> create table table1 (projectid number, userid number, roleid number);
Table created.
SQL> insert into table1 values (101, 1, 10);
1 row created.
SQL> insert into table1 values (101, 2, 10);
1 row created.
SQL> insert into table1 values (102, 2, 10);
1 row created.
SQL> insert into table1 values (102, 3, 10);
1 row created.
SQL> insert into table1 values (103, 1, 10);
1 row created.
SQL> select * from table1;
PROJECTID USERID ROLEID
---------- ---------- ----------
101 1 10
101 2 10
102 2 10
102 3 10
103 1 10
SQL> update table1 t1
2 set roleid = 11
3 where roleid = 10 and
4 (case when userid = 1 then 1 when userid = 2 then 2 when userid = 3 then 3 else 4 end) =
5 (select min(case when userid = 1 then 1 when userid = 2 then 2 when userid = 3 then 3 else 4 end)
6 from table1
7 where projectid = t1.projectid);
3 rows updated.
SQL> select * from table1;
PROJECTID USERID ROLEID
---------- ---------- ----------
101 1 11
101 2 10
102 2 11
102 3 10
103 1 11
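The CASE-ranking trick is not Oracle-specific; here is the same update run in SQLite through Python, reproducing the transcript's data so the three updated rows can be checked:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (projectid INT, userid INT, roleid INT);
    INSERT INTO table1 VALUES (101,1,10),(101,2,10),(102,2,10),(102,3,10),(103,1,10);
""")

# Rank users by the priority list (1, 2, 3, everything else) and update only
# the row holding the minimum rank within each project.
conn.execute("""
    UPDATE table1
    SET roleid = 11
    WHERE roleid = 10 AND
          (CASE WHEN userid = 1 THEN 1 WHEN userid = 2 THEN 2
                WHEN userid = 3 THEN 3 ELSE 4 END) =
          (SELECT MIN(CASE WHEN userid = 1 THEN 1 WHEN userid = 2 THEN 2
                           WHEN userid = 3 THEN 3 ELSE 4 END)
           FROM table1 t2 WHERE t2.projectid = table1.projectid)
""")
updated = conn.execute(
    "SELECT projectid, userid FROM table1 WHERE roleid = 11 ORDER BY projectid"
).fetchall()
```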
qid & accept id:
(25687106, 25721023)
query:
PL/SQL: Any trick to avoid cloning of objects?
soup:
soup wrap:
Based on Alex's suggestion (use an associative array), I have created a package that encapsulates objects, so we can use them in an abstract way, as if they were references:
create or replace type cla as object -- complex class
(
name varchar2(10)
);
create or replace package eo as -- package to encapsulate objects
type ao_t -- type for hash (associative array)
is table of cla
index by varchar2(100);
o ao_t; -- hash of objects
end;
declare
o1 varchar2(100);
o2 varchar2(100);
begin
o1 := 'o1'; -- objects are hash indexes now
eo.o(o1) := new cla('hi'); -- store new object into the hash
o2 := o1; -- assign object == assign index
eo.o(o1).name := 'bye'; -- change object attribute
dbms_output.put_line('eo.o(o1).name: ' || eo.o(o1).name);
dbms_output.put_line('eo.o(o2).name: ' || eo.o(o2).name); -- equal?
end;
Now 'bye' is written twice, as expected with object references. The trick is that both o1 and o2 contain the same index (~reference) to the same object. The syntax is a bit more complex, but still very similar to standard object manipulation when accessing both attributes and methods.
Assigning one object to another is exactly like standard object assignment:
o2 := o1;
Same for using an object as a function argument:
afunc(o1);
Internally, afunc() will just use o1 with the same special syntax to access methods or attributes (and no special syntax to assign):
eo.o(o1).attrib := 5;
eo.o(o1).method('nice');
o3 := o1;
The only requirement to use this trick is to add a hash (type and variable) to the eo package for each class we want to encapsulate.
Update: The index value based on the variable name:
o1 := 'o1';
could be a problem if, for example, we create the object in a function, since the function would have to know every value used in the rest of the program in order to avoid repeating one. A solution is to take the value from the hash size:
o1 := eo.o.count;
That leads us to another problem: the hash content is persistent (since it lives in a package), so more and more objects accumulate in the hash as we create them (even if they are created by the same function). A solution is to remove the object from the hash when we are done with it:
eo.o.delete(o1);
So the fixed program would be:
create or replace type cla as object -- complex class
(
name varchar2(10)
);
create or replace package eo as -- package to encapsulate objects
type ao_t -- type for hash (associative array)
is table of cla
index by varchar2(100);
o ao_t; -- hash of objects
end;
declare
o1 varchar2(100);
o2 varchar2(100);
begin
o1 := eo.o.count; -- index based on hash size
eo.o(o1) := new cla('hi'); -- store new object into the hash
o2 := o1; -- assign object == assign index
eo.o(o1).name := 'bye'; -- change object attribute
dbms_output.put_line('eo.o(o1).name: ' || eo.o(o1).name);
dbms_output.put_line('eo.o(o2).name: ' || eo.o(o2).name); -- equal?
eo.o.delete(o1); -- remove object from the hash
eo.o.delete(o2); -- redundant: same index, and delete on a missing key is a no-op
end;
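The "index as reference" idea can be illustrated outside PL/SQL. This toy Python sketch (names are illustrative only; Python objects already have reference semantics, so this is purely an analogy) shows handles as keys into a shared registry, so copying a handle copies the reference, not the object:

```python
registry = {}  # plays the role of the package hash eo.o

def new_obj(attrs):
    """Store an object and hand back its key, like o1 := eo.o.count."""
    handle = len(registry)
    registry[handle] = dict(attrs)
    return handle

o1 = new_obj({"name": "hi"})
o2 = o1                          # assign object == assign index
registry[o1]["name"] = "bye"     # change an attribute through one handle
seen = (registry[o1]["name"], registry[o2]["name"])  # both handles see 'bye'
del registry[o1]                 # remove the object when done, like eo.o.delete(o1)
```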
qid & accept id:
(25734598, 25734718)
query:
Get all posts for specific tag with SQL
soup:
soup wrap:
I assume you are happy to send two requests to the database.
First, get all the posts for a given tag:
SELECT * FROM blog_posts bp
WHERE EXISTS (SELECT * FROM blog_tags bt INNER JOIN
tags t ON t.id = bt.tag_id
WHERE bp.id = bt.post_id
AND t.tag = @SearchTag)
Second, I guess you want the tags that are linked, via posts, to the one you are looking for:
SELECT * FROM tags t
WHERE EXISTS ( -- Here we link two tags via blog_tags; the inner tags alias is
               -- renamed to ts so the outer t stays correlated on bt2.tag_id
SELECT * FROM blog_tags bt1 INNER JOIN
blog_tags bt2 ON bt1.post_id = bt2.post_id
AND bt1.tag_id != bt2.tag_id INNER JOIN
tags ts ON ts.id = bt1.tag_id
WHERE ts.tag = @SearchTag
AND t.id = bt2.tag_id
)
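Both EXISTS queries are standard SQL; here they are run against a tiny SQLite schema matching the answer's table names (data is made up, and the inner tags alias is renamed ts so the outer correlation is unambiguous):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE blog_posts (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE tags (id INTEGER PRIMARY KEY, tag TEXT);
    CREATE TABLE blog_tags (post_id INT, tag_id INT);
    INSERT INTO blog_posts VALUES (1,'p1'), (2,'p2');
    INSERT INTO tags VALUES (1,'sql'), (2,'python'), (3,'rust');
    INSERT INTO blog_tags VALUES (1,1),(1,2),(2,3);
""")

search = ("sql",)
# posts carrying the search tag
posts = conn.execute("""
    SELECT bp.title FROM blog_posts bp
    WHERE EXISTS (SELECT 1 FROM blog_tags bt
                  JOIN tags t ON t.id = bt.tag_id
                  WHERE bp.id = bt.post_id AND t.tag = ?)
""", search).fetchall()

# other tags that share a post with the search tag
related = conn.execute("""
    SELECT t.tag FROM tags t
    WHERE EXISTS (SELECT 1 FROM blog_tags bt1
                  JOIN blog_tags bt2 ON bt1.post_id = bt2.post_id
                                    AND bt1.tag_id != bt2.tag_id
                  JOIN tags ts ON ts.id = bt1.tag_id
                  WHERE ts.tag = ? AND t.id = bt2.tag_id)
""", search).fetchall()
```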
qid & accept id:
(25790263, 25791396)
query:
How to convert 2d table into 3d table using SQL
soup:
soup wrap:
As Sean Lange said, use a pivot clause, assuming you're on 11g or higher:
select *
from classes
pivot (max(class_size) as class_size
for (class) in ('I' as i, 'II' as ii, 'III' as iii))
order by school;
SCHOOL I_CLASS_SIZE II_CLASS_SIZE III_CLASS_SIZE
------ ------------ ------------- --------------
S1 23 12 54
S2 57 12 81
S3 12 25 65
If you're still on an earlier version that doesn't support pivot then you can use a manual approach to do the same thing:
select school,
max(case when class = 'I' then class_size end) as i,
max(case when class = 'II' then class_size end) as ii,
max(case when class = 'III' then class_size end) as iii
from classes
group by school
order by school;
SCHOOL I II III
------ ---------- ---------- ----------
S1 23 12 54
S2 57 12 81
S3 12 25 65
To show the total for each school as well, just add a sum:
select school,
max(case when class = 'I' then class_size end) as i,
max(case when class = 'II' then class_size end) as ii,
max(case when class = 'III' then class_size end) as iii,
sum(class_size) as total
from classes
group by school
order by school;
To sum the columns too, you could use rollup():
select school,
max(case when class = 'I' then class_size end) as i,
max(case when class = 'II' then class_size end) as ii,
max(case when class = 'III' then class_size end) as iii,
sum(class_size) as total
from classes
group by rollup(school)
order by school;
SCHOOL I II III TOTAL
------ ---------- ---------- ---------- ----------
S1 23 12 54 89
S2 57 12 81 150
S3 12 25 65 102
57 25 81 341
SQL Fiddle. But it might be something you should do in your client/application. SQL*Plus can do this automatically with its compute command, for example.
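The manual conditional-aggregation pivot works in essentially any SQL dialect, not just pre-11g Oracle. A runnable sketch in SQLite (ROLLUP is omitted, since SQLite lacks it; the sample rows are invented in the shape of the answer's table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE classes (school TEXT, class TEXT, class_size INT);
    INSERT INTO classes VALUES
      ('S1','I',23),('S1','II',12),('S1','III',54),
      ('S2','I',57),('S2','II',12),('S2','III',81);
""")

# One output column per class value, via MAX over a CASE filter, plus the
# per-school total from a plain SUM.
rows = conn.execute("""
    SELECT school,
           MAX(CASE WHEN class = 'I'   THEN class_size END) AS i,
           MAX(CASE WHEN class = 'II'  THEN class_size END) AS ii,
           MAX(CASE WHEN class = 'III' THEN class_size END) AS iii,
           SUM(class_size) AS total
    FROM classes
    GROUP BY school
    ORDER BY school
""").fetchall()
```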
qid & accept id:
(25839647, 25840404)
query:
Write query SQLite with selectionArgs
soup:
soup wrap:
If you insist on using selectionArgs, you can do it like below:
First check your arguments, then build your query accordingly. For example:
String[] myArray = new String[] { "31", "", "3", "" };
List<String> myQueryValue = new ArrayList<>();
String myQueryParam = "";
if(!myArray[0].equals("")){
myQueryParam = myQueryParam + "UID = ? AND ";
myQueryValue.add(myArray[0]);
}
if(!myArray[1].equals("")){
myQueryParam = myQueryParam + "Age > ? AND ";
myQueryValue.add(myArray[1]);
}
if(!myArray[2].equals("")){
myQueryParam = myQueryParam + "Room = ? AND ";
myQueryValue.add(myArray[2]);
}
if(!myArray[3].equals("")){
myQueryParam = myQueryParam + "Adre = ? AND ";
myQueryValue.add(myArray[3]);
}
and at the end
// strip the trailing "AND " left over from the last appended clause
if (myQueryParam.endsWith("AND ")) {
myQueryParam = myQueryParam.substring(0, myQueryParam.length() - 4);
}
String[] finalValue = new String[ myQueryValue.size() ];
myQueryValue.toArray( finalValue );
Cursor cur = sqlite_obj.query(TableName, null, myQueryParam, finalValue , null, null, null, null);
You can also use a loop to build the values and the query parameters.
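The same "build the WHERE fragments and their arguments in parallel" idea, sketched loop-style in Python with sqlite3 (table and columns are invented to match the example): empty arguments are skipped, and joining the fragments with AND means no trailing keyword is ever left over.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (UID INT, Age INT, Room INT, Adre TEXT)")
conn.execute("INSERT INTO people VALUES (31, 40, 3, 'x'), (31, 2, 3, 'y')")

my_array = ["31", "", "3", ""]
clauses, args = [], []
for value, fragment in zip(my_array, ["UID = ?", "Age > ?", "Room = ?", "Adre = ?"]):
    if value != "":
        clauses.append(fragment)   # keep the placeholder in the SQL text
        args.append(value)         # and its value in the parallel args list

where = " AND ".join(clauses)      # "UID = ? AND Room = ?"
rows = conn.execute(f"SELECT COUNT(*) FROM people WHERE {where}", args).fetchone()
```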
qid & accept id:
(25916350, 25917126)
query:
How to change VARCHAR type to DATETIME using ALTER in Postgresql?
soup:
soup wrap:
You want the USING clause of ALTER TABLE ... ALTER COLUMN ... TYPE, and the to_timestamp function.
ALTER TABLE mytable
ALTER COLUMN thecolumn
TYPE TIMESTAMP WITH TIME ZONE
USING to_timestamp(thecolumn, 'YYYY-MM-DD HH24:MI:SS');
In this case, as the data looks like it's already a valid timestamp, you can probably simplify it with a cast instead:
ALTER TABLE mytable
ALTER COLUMN thecolumn
TYPE TIMESTAMP WITH TIME ZONE
USING thecolumn::timestamp with time zone;
You will note that I've used the type name "timestamp with time zone" instead of "datetime". That's because in PostgreSQL, datetime is just an alias for timestamp without time zone... but in most cases you actually want to use timestamp with time zone instead. To learn more about timestamps, see the manual.
qid & accept id:
(25940181, 25940389)
query:
Contains at least a count of different character in a set
soup:
soup wrap:
A simple solution would be a pattern like this:
(.*[abcxyz]){3}
This will match zero or more of any character followed by one of a, b, c, x, y, or z, repeated 3 times — so characters from that set must appear at least 3 times in the subject string.
To match only strings that contain different letters, you could use a negative lookahead ((?!…)) and a backreference (\N):
(.*([abcxyz])(?!.*\2)){3}
This will match zero or more of any character followed by one of a, b, c, x, y, or z, as long as another instance of that character does not appear later in the string (i.e. it matches only the last instance of each character); repeating that 3 times means at least 3 different characters from the set must appear in the subject string.
Of course, you can change the {3} to anything you like, but note that will not work if you need to specify a maximum number of times these characters can appear in your string, only the minimum.
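Both patterns can be checked quickly with Python's re module (the sample strings are made up; note the backreference is \2 because the character class is the second capturing group):

```python
import re

at_least_three = re.compile(r"(.*[abcxyz]){3}")
three_distinct = re.compile(r"(.*([abcxyz])(?!.*\2)){3}")

# at least 3 occurrences of set characters, repeats allowed
hits = [bool(at_least_three.search(s)) for s in ["aab", "ab", "xbyqa"]]
# at least 3 *different* set characters required
distinct_hits = [bool(three_distinct.search(s)) for s in ["aab", "abc", "aaa"]]
```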
qid & accept id:
(25941109, 25942474)
query:
Access query to include all values, including Null
soup:
soup wrap:
Try using a wildcard instead:
In (IIf([Forms]![FormQuery]![Completed]=True,"COMPLETED",""),
IIf([Forms]![FormQuery]![Cancelled]=True,"Cancelled",""),
IIf([Forms]![FormQuery]![All]=True,[Permits]![Status],"*"))
You can test this... just put this in your query criteria field:
IIf(True,"*","")
Run it with False instead of True... experiment.
I recommend you change your method. Use a parameter query but avoid the IN() statement. General how-to at http://accessmvp.com/thedbguy/articles/parameterquerybasics.html.
Alternatively, use VBA. General how-to at http://answers.microsoft.com/en-us/office/forum/office_2007-access/checkbox-filter-form-for-query/ab65c120-6356-e011-8dfc-68b599b31bf5
Either one is more typical, and I believe easier to troubleshoot and maintain.
qid & accept id:
(25961419, 25961436)
query:
I want to get the maximum value from my S_ID column which is declared as varchar type
soup:
soup wrap:
You can get the maximum value by using this construct:
select s_id
from stock_detail
order by length(s_id) desc, s_id desc
limit 1;
This puts the longer values first.
If you want to use max(), then you need to deconstruct the number. Something like:
select concat('S_', max(replace(s_id, 'S_', '') + 0))
from stock_detail;
This allows you to get a numeric maximum value rather than a character maximum value, which is the root of your problem.
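The length-then-value ordering trick runs unchanged in SQLite; the S_<n> style IDs below are made-up sample data in the shape of the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stock_detail (s_id TEXT)")
conn.executemany("INSERT INTO stock_detail VALUES (?)",
                 [("S_1",), ("S_9",), ("S_10",), ("S_2",)])

# A plain MAX(s_id) would return 'S_9' (character comparison); ordering by
# length first puts the numerically larger 'S_10' on top.
top = conn.execute("""
    SELECT s_id FROM stock_detail
    ORDER BY length(s_id) DESC, s_id DESC
    LIMIT 1
""").fetchone()
```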
qid & accept id:
(25992186, 25992580)
query:
cast list of strings as int list in sql query / stored procedure
soup:
You can create a Table-Valued Function which takes the nVarChar and creates a new record for each value, where you tell it the delimiter. My example here returns a table with a single Value column, you can then use this as a sub query for your IN Selection :
\nCreate FUNCTION [dbo].[fnSplitVariable]\n(\n @List nvarchar(2000),\n @delimiter nvarchar(5)\n) \nRETURNS @RtnValue table \n(\n\n Id int identity(1,1),\n Variable varchar(15),\n Value nvarchar(100)\n) \nAS \nBEGIN\nDeclare @Count int\nset @Count = 1\n While (Charindex(@delimiter,@List)>0)\n Begin \n Insert Into @RtnValue (Value, Variable)\n Select \n Value = ltrim(rtrim(Substring(@List,1,Charindex(@delimiter,@List)-1))),\n Variable = 'V' + convert(varchar,@Count)\n Set @List = Substring(@List,Charindex(@delimiter,@List)+len(@delimiter),len(@List))\n Set @Count = @Count + 1\n End \n\n Insert Into @RtnValue (Value, Variable)\n Select Value = ltrim(rtrim(@List)), Variable = 'V' + convert(varchar,@Count)\n\n Return\nEND\n
\nThen in your where statement you could do the following:
\nWHERE (b.CityID IN (Select Value from fnSplitVariable(@CityIDs, ','))\n
\nI have included your original Procedure, and updated it to use the function above:
\nALTER PROCEDURE [dbo].[SearchResume]\n @KeywordSearch nvarchar(500),\n @GreaterThanDate datetime,\n @CityIDs nvarchar(500),\n @ProvinceIDs nvarchar(500),\n @CountryIDs nvarchar(500),\n @IndustryIDs nvarchar(500)\n\nAS\nBEGIN\n\nDECLARE @sql as nvarchar(4000)\n\nSET @sql = N'\n DECLARE @KeywordSearch nvarchar(500),\n @CityIDs nvarchar(500),\n @ProvinceIDs nvarchar(500),\n @CountryIDs nvarchar(500),\n @IndustryIDs nvarchar(500) \n\n SET @KeywordSearch = '''+@KeywordSearch+'''\n SET @CityIDs = '''+@CityIDs+'''\n SET @ProvinceIDs = '''+@ProvinceIDs+'''\n SET @CountryIDs = '''+@CountryIDs+'''\n SET @IndustryIDs = '''+@IndustryIDs+'''\nSELECT DISTINCT\n UserID,\n ResumeID,\n CASE a.Confidential WHEN 1 THEN ''Confidential'' ELSE LastName + '','' + FirstName END as ''Name'',\n a.Description ''ResumeTitle'',\n CurrentTitle,\n ModifiedDate,\n CurrentEmployerName,\n PersonalDescription,\n CareerObjectives,\n CASE ISNULL(b.SalaryRangeID, ''0'') WHEN ''0'' THEN CAST(SalarySpecific as nvarchar(8)) ELSE c.Description END ''Salary'',\n e.Description ''EducationLevel'',\n f.Description ''CareerLevel'',\n g.Description ''JobType'',\n h.Description ''Relocate'',\n i.Description + ''-'' + j.Description + ''-'' + k.Description ''Location''\n FROM dbo.Resume a JOIN dbo.Candidate b ON a.CandidateID = b.CandidateID\n LEFT OUTER JOIN SalaryRange c ON b.SalaryRangeID = c.SalaryRangeID\n JOIN EducationLevel e ON b.EducationLevelID = e.EducationLevelID\n JOIN CareerLevel f ON b.CareerLevelID = f.CareerLevelID\n JOIN JobType g ON b.JobTypeID = g.JobTypeID\n JOIN WillingToRelocate h ON b.WillingToRelocateID = h.WillingToRelocateID\n JOIN City i ON b.CityID = i.CityID\n JOIN StateProvince j ON j.StateProvinceID = b.StateProvinceID\n JOIN Country k ON k.CountryID = b.CountryID\n WHERE ( (ModifiedDate > ''' + CAST(@GreaterThanDate as nvarchar(55)) + ''')\n\n\n '\nIF (LEN(@CityIDs) >0)\nBEGIN\n SET @sql = @sql + N'AND (b.CityID IN (Select Value from fnSplitVariable(@CityIDs,'','') ))'\nEND\nIF 
(LEN(@ProvinceIDs) >0)\nBEGIN\n SET @sql = @sql + N'AND (b.StateProvinceID IN (Select Value from fnSplitVariable(@ProvinceIDs,'','') ))'\nEND\nIF (LEN(@CountryIDs) >0)\nBEGIN\n SET @sql = @sql + N'AND (b.CountryID IN (Select Value from fnSplitVariable(@CountryIDs,'','') ))'\nEND\nIF (LEN(@IndustryIDs) >0)\nBEGIN\n SET @sql = @sql + N'AND (b.IndustryPreferenceID IN (Select Value from fnSplitVariable(@IndustryIDs,'','') ))'\nEND\n\nIF (LEN(@KeywordSearch) > 0)\nBEGIN\n SET @sql = @sql + N' AND (' + @KeywordSearch + ')'\nEND\n\nSET @sql = @sql + N') ORDER BY ModifiedDate desc'\n\n--select @sql\nexec sp_executesql @sql\n\nEND\n
\n
soup wrap:
You can create a Table-Valued Function which takes the nvarchar and creates a new record for each value, given the delimiter you specify. The example here returns a table whose Value column holds the split values, which you can then use as a subquery for your IN selection:
Create FUNCTION [dbo].[fnSplitVariable]
(
@List nvarchar(2000),
@delimiter nvarchar(5)
)
RETURNS @RtnValue table
(
Id int identity(1,1),
Variable varchar(15),
Value nvarchar(100)
)
AS
BEGIN
Declare @Count int
set @Count = 1
While (Charindex(@delimiter,@List)>0)
Begin
Insert Into @RtnValue (Value, Variable)
Select
Value = ltrim(rtrim(Substring(@List,1,Charindex(@delimiter,@List)-1))),
Variable = 'V' + convert(varchar,@Count)
Set @List = Substring(@List,Charindex(@delimiter,@List)+len(@delimiter),len(@List))
Set @Count = @Count + 1
End
Insert Into @RtnValue (Value, Variable)
Select Value = ltrim(rtrim(@List)), Variable = 'V' + convert(varchar,@Count)
Return
END
Then in your where statement you could do the following:
WHERE (b.CityID IN (Select Value from fnSplitVariable(@CityIDs, ',')))
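The split-then-IN idea is easy to sketch outside T-SQL as well. Here is a minimal Python/sqlite3 illustration (the candidate table and its columns are invented for the demo): the delimited string is split in application code and each piece is bound as a parameter:

```python
import sqlite3

def split_variable(value, delimiter=","):
    """Mimic fnSplitVariable: split a delimited string into trimmed values."""
    return [part.strip() for part in value.split(delimiter) if part.strip()]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE candidate (CandidateID INTEGER, CityID INTEGER)")
conn.executemany("INSERT INTO candidate VALUES (?, ?)",
                 [(1, 10), (2, 20), (3, 30)])

city_ids = [int(v) for v in split_variable("10, 30")]  # e.g. the @CityIDs value
placeholders = ",".join("?" * len(city_ids))           # one ? per value
rows = conn.execute(
    f"SELECT CandidateID FROM candidate WHERE CityID IN ({placeholders})",
    city_ids).fetchall()
print(rows)  # [(1,), (3,)]
```

Binding one placeholder per split value keeps the query parameterized instead of concatenating the raw string into the SQL.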
I have included your original Procedure, and updated it to use the function above:
ALTER PROCEDURE [dbo].[SearchResume]
@KeywordSearch nvarchar(500),
@GreaterThanDate datetime,
@CityIDs nvarchar(500),
@ProvinceIDs nvarchar(500),
@CountryIDs nvarchar(500),
@IndustryIDs nvarchar(500)
AS
BEGIN
DECLARE @sql as nvarchar(4000)
SET @sql = N'
DECLARE @KeywordSearch nvarchar(500),
@CityIDs nvarchar(500),
@ProvinceIDs nvarchar(500),
@CountryIDs nvarchar(500),
@IndustryIDs nvarchar(500)
SET @KeywordSearch = '''+@KeywordSearch+'''
SET @CityIDs = '''+@CityIDs+'''
SET @ProvinceIDs = '''+@ProvinceIDs+'''
SET @CountryIDs = '''+@CountryIDs+'''
SET @IndustryIDs = '''+@IndustryIDs+'''
SELECT DISTINCT
UserID,
ResumeID,
CASE a.Confidential WHEN 1 THEN ''Confidential'' ELSE LastName + '','' + FirstName END as ''Name'',
a.Description ''ResumeTitle'',
CurrentTitle,
ModifiedDate,
CurrentEmployerName,
PersonalDescription,
CareerObjectives,
CASE ISNULL(b.SalaryRangeID, ''0'') WHEN ''0'' THEN CAST(SalarySpecific as nvarchar(8)) ELSE c.Description END ''Salary'',
e.Description ''EducationLevel'',
f.Description ''CareerLevel'',
g.Description ''JobType'',
h.Description ''Relocate'',
i.Description + ''-'' + j.Description + ''-'' + k.Description ''Location''
FROM dbo.Resume a JOIN dbo.Candidate b ON a.CandidateID = b.CandidateID
LEFT OUTER JOIN SalaryRange c ON b.SalaryRangeID = c.SalaryRangeID
JOIN EducationLevel e ON b.EducationLevelID = e.EducationLevelID
JOIN CareerLevel f ON b.CareerLevelID = f.CareerLevelID
JOIN JobType g ON b.JobTypeID = g.JobTypeID
JOIN WillingToRelocate h ON b.WillingToRelocateID = h.WillingToRelocateID
JOIN City i ON b.CityID = i.CityID
JOIN StateProvince j ON j.StateProvinceID = b.StateProvinceID
JOIN Country k ON k.CountryID = b.CountryID
WHERE ( (ModifiedDate > ''' + CAST(@GreaterThanDate as nvarchar(55)) + ''')
'
IF (LEN(@CityIDs) >0)
BEGIN
SET @sql = @sql + N'AND (b.CityID IN (Select Value from fnSplitVariable(@CityIDs,'','') ))'
END
IF (LEN(@ProvinceIDs) >0)
BEGIN
SET @sql = @sql + N'AND (b.StateProvinceID IN (Select Value from fnSplitVariable(@ProvinceIDs,'','') ))'
END
IF (LEN(@CountryIDs) >0)
BEGIN
SET @sql = @sql + N'AND (b.CountryID IN (Select Value from fnSplitVariable(@CountryIDs,'','') ))'
END
IF (LEN(@IndustryIDs) >0)
BEGIN
SET @sql = @sql + N'AND (b.IndustryPreferenceID IN (Select Value from fnSplitVariable(@IndustryIDs,'','') ))'
END
IF (LEN(@KeywordSearch) > 0)
BEGIN
SET @sql = @sql + N' AND (' + @KeywordSearch + ')'
END
SET @sql = @sql + N') ORDER BY ModifiedDate desc'
--select @sql
exec sp_executesql @sql
END
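The procedure above concatenates the parameter values straight into one big string, which is fragile (a quote inside @KeywordSearch would break the dynamic SQL). Purely to illustrate the same conditional-filter pattern, here is a hedged Python/sqlite3 sketch (the resume table and its columns are invented) that appends each AND clause only when its filter is supplied, and binds the values as parameters:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE resume (CityID INTEGER, CountryID INTEGER, Title TEXT)")
conn.executemany("INSERT INTO resume VALUES (?, ?, ?)", [
    (10, 1, "dev"), (20, 1, "dba"), (20, 2, "qa"),
])

def search(city_ids=None, country_ids=None):
    """Build the WHERE clause piece by piece, binding values as parameters."""
    sql, params = "SELECT Title FROM resume WHERE 1=1", []
    if city_ids:
        sql += f" AND CityID IN ({','.join('?' * len(city_ids))})"
        params += city_ids
    if country_ids:
        sql += f" AND CountryID IN ({','.join('?' * len(country_ids))})"
        params += country_ids
    return conn.execute(sql, params).fetchall()

print(search(city_ids=[20], country_ids=[1]))  # [('dba',)]
```

The `WHERE 1=1` seed lets every optional clause start with `AND`, the same trick the original procedure relies on with its always-true date predicate.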
qid & accept id:
(26001924, 26003864)
query:
Where array does not contain value Postgres
soup:
A simple, "brute-force" method would be to cast the array to text and check:
\nSELECT title, short_url, categories, winning_offer_amount\nFROM auctions\nWHERE ended_at IS NOT NULL\nAND categories::text NOT LIKE '% > %'; -- including blanks?\n
\nA clean and elegant solution with unnest() in a NOT EXISTS semi-join:
\nSELECT title, short_url, categories, winning_offer_amount\nFROM auctions a\nWHERE ended_at IS NOT NULL\nAND NOT EXISTS (\n SELECT 1\n FROM unnest(a.categories) AS cat\n WHERE cat LIKE '% > %'\n );\n
\n\n
soup wrap:
A simple, "brute-force" method would be to cast the array to text and check:
SELECT title, short_url, categories, winning_offer_amount
FROM auctions
WHERE ended_at IS NOT NULL
AND categories::text NOT LIKE '% > %'; -- including blanks?
A clean and elegant solution with unnest() in a NOT EXISTS semi-join:
SELECT title, short_url, categories, winning_offer_amount
FROM auctions a
WHERE ended_at IS NOT NULL
AND NOT EXISTS (
SELECT 1
FROM unnest(a.categories) AS cat
WHERE cat LIKE '% > %'
);
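For intuition, the NOT EXISTS semi-join keeps a row only when no element of its array matches the pattern. The same logic in plain Python terms, with made-up data:

```python
auctions = [
    {"title": "A", "categories": ["Art", "Art > Prints"]},
    {"title": "B", "categories": ["Books"]},
    {"title": "C", "categories": []},  # an empty array also qualifies
]

# Equivalent of:
#   NOT EXISTS (SELECT 1 FROM unnest(a.categories) AS cat WHERE cat LIKE '% > %')
kept = [a["title"] for a in auctions
        if not any(" > " in cat for cat in a["categories"])]
print(kept)  # ['B', 'C']
```

Note the empty-array case: with no elements to unnest, NOT EXISTS is trivially true, which matches the "including blanks?" comment on the brute-force version.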
qid & accept id:
(26004152, 26004258)
query:
Combining 2 fields into 1 field
soup:
It's slightly ambiguous, but it sounds like you want to union all the two results together:
\nselect\n c.CustomerID ID,\n c.CustomerName Cname,\n o.TotalAmt,\n o.OrderType\nfrom\n Customers c\n left join\n AM_Orders o\n on c.CustomerID = o.CustomerID\nunion all \nselect\n c.CustomerID ID,\n c.CustomerName Cname,\n o.TotalAmt,\n o.OrderType\nfrom\n Customers c\n left join\n PM_Orders o\n on c.CustomerID = o.CustomerID\norder by\n ID;\n
\nor as Tab suggeted, union first then join. This might deal better with cases where there's an entry in one table but not the other:
\n;with all_orders as (\n select\n o.CustomerID,\n o.TotalAmt,\n o.OrderType\n from\n AM_Orders o\n union all\n select\n o.CustomerID,\n o.TotalAmt,\n o.OrderType\n from\n PM_Orders o\n) select\n c.CustomerID ID,\n c.CustomerName Cname,\n a.TotalAmt,\n a.OrderType\nfrom\n Customers c\n left join\n all_orders a\n on c.CustomerID = a.CustomerID\norder by\n ID;\n
\n
soup wrap:
It's slightly ambiguous, but it sounds like you want to union all the two results together:
select
c.CustomerID ID,
c.CustomerName Cname,
o.TotalAmt,
o.OrderType
from
Customers c
left join
AM_Orders o
on c.CustomerID = o.CustomerID
union all
select
c.CustomerID ID,
c.CustomerName Cname,
o.TotalAmt,
o.OrderType
from
Customers c
left join
PM_Orders o
on c.CustomerID = o.CustomerID
order by
ID;
or, as Tab suggested, union first and then join. This might deal better with cases where there's an entry in one table but not the other:
;with all_orders as (
select
o.CustomerID,
o.TotalAmt,
o.OrderType
from
AM_Orders o
union all
select
o.CustomerID,
o.TotalAmt,
o.OrderType
from
PM_Orders o
) select
c.CustomerID ID,
c.CustomerName Cname,
a.TotalAmt,
a.OrderType
from
Customers c
left join
all_orders a
on c.CustomerID = a.CustomerID
order by
ID;
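A quick way to see the difference: with the union-first CTE, a customer with no orders in either table still appears once with NULLs. A runnable sqlite3 sketch (table contents invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Customers (CustomerID INTEGER, CustomerName TEXT);
CREATE TABLE AM_Orders (CustomerID INTEGER, TotalAmt REAL, OrderType TEXT);
CREATE TABLE PM_Orders (CustomerID INTEGER, TotalAmt REAL, OrderType TEXT);
INSERT INTO Customers VALUES (1, 'Ann'), (2, 'Bob');
INSERT INTO AM_Orders VALUES (1, 10.0, 'AM');
INSERT INTO PM_Orders VALUES (1, 5.0, 'PM');
""")

rows = conn.execute("""
WITH all_orders AS (
    SELECT CustomerID, TotalAmt, OrderType FROM AM_Orders
    UNION ALL
    SELECT CustomerID, TotalAmt, OrderType FROM PM_Orders
)
SELECT c.CustomerID, c.CustomerName, a.TotalAmt, a.OrderType
FROM Customers c LEFT JOIN all_orders a ON c.CustomerID = a.CustomerID
ORDER BY c.CustomerID
""").fetchall()
for r in rows:
    print(r)  # Ann gets her AM and PM rows; Bob gets a single all-NULL row
```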
qid & accept id:
(26019476, 26020101)
query:
H2 equivalent to Oracle's user
soup:
Isn't USER a function in H2?
\nSELECT USER()\n
\nwill return the current user. Works as expected as a default value for a column:
\ncreate table MY_TABLE(\n CREATED_BY Varchar2(100) DEFAULT USER() NOT NULL,\n value Varchar2(10)\n)\nINSERT INTO MY_TABLE (value) VALUES ('XXX');\n
\nAs an other user:
\nINSERT INTO MY_TABLE (value) VALUES ('YYY');\nSELECT * FROM MY_TABLE;\n
\nResult:
\nCREATED_BY VALUE \nSA XXX\nSYLVAIN YYY\n
\n
soup wrap:
Isn't USER a function in H2?
SELECT USER()
will return the current user. Works as expected as a default value for a column:
create table MY_TABLE(
CREATED_BY Varchar2(100) DEFAULT USER() NOT NULL,
value Varchar2(10)
)
INSERT INTO MY_TABLE (value) VALUES ('XXX');
As another user:
INSERT INTO MY_TABLE (value) VALUES ('YYY');
SELECT * FROM MY_TABLE;
Result:
CREATED_BY VALUE
SA XXX
SYLVAIN YYY
qid & accept id:
(26046622, 26046922)
query:
T-SQL: efficiently DELETE records in right table that are not in left table when using RIGHT JOIN
soup:
DELETE FROM [FACT]\nWHERE NOT EXISTS (SELECT 1\n FROM [DIMENSION]\n WHERE [FACT].[FK] = [DIMENSION].[PK]\n AND [FACT].[TYPE] LIKE 'LAB%')\n
\nSince these are FACT and DIM tables I think you will be deleting Large amount of data, otherwise you wouldn't care much about the performance. Another thing you can consider when delete large amount of data is, Deleting it in Smaller chunks. By doing something as below
\nDECLARE @Deleted_Rows INT;\nSET @Deleted_Rows = 1;\n\n\nWHILE (@Deleted_Rows > 0)\n BEGIN\n -- Delete some small number of rows at a time\n DELETE TOP (10000) FROM [FACT]\n WHERE NOT EXISTS (SELECT 1\n FROM [DIMENSION]\n WHERE [FACT].[FK] = [DIMENSION].[PK]\n AND [FACT].[TYPE] LIKE 'LAB%')\n\n SET @Deleted_Rows = @@ROWCOUNT;\nEND\n
\n
soup wrap:
DELETE FROM [FACT]
WHERE NOT EXISTS (SELECT 1
FROM [DIMENSION]
WHERE [FACT].[FK] = [DIMENSION].[PK]
AND [FACT].[TYPE] LIKE 'LAB%')
Since these are FACT and DIM tables, I think you will be deleting a large amount of data; otherwise you wouldn't care much about the performance. Another thing you can consider when deleting a large amount of data is deleting it in smaller chunks, by doing something like the below:
DECLARE @Deleted_Rows INT;
SET @Deleted_Rows = 1;
WHILE (@Deleted_Rows > 0)
BEGIN
-- Delete some small number of rows at a time
DELETE TOP (10000) FROM [FACT]
WHERE NOT EXISTS (SELECT 1
FROM [DIMENSION]
WHERE [FACT].[FK] = [DIMENSION].[PK]
AND [FACT].[TYPE] LIKE 'LAB%')
SET @Deleted_Rows = @@ROWCOUNT;
END
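The chunking loop carries over to other engines too. SQLite's default build lacks DELETE TOP/LIMIT, so this sketch (fact/dimension tables invented, batch size shrunk for the demo) uses a rowid subquery to delete in batches until a pass removes nothing:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fact (fk INTEGER)")
conn.execute("CREATE TABLE dimension (pk INTEGER)")
conn.executemany("INSERT INTO fact VALUES (?)", [(i % 5,) for i in range(100)])
conn.executemany("INSERT INTO dimension VALUES (?)", [(0,), (1,)])

deleted = 1
while deleted > 0:
    # Delete a small batch of fact rows that have no matching dimension row
    cur = conn.execute("""
        DELETE FROM fact WHERE rowid IN (
            SELECT f.rowid FROM fact f
            WHERE NOT EXISTS (SELECT 1 FROM dimension d WHERE f.fk = d.pk)
            LIMIT 10)
    """)
    conn.commit()          # commit per batch, keeping each transaction small
    deleted = cur.rowcount

remaining = conn.execute("SELECT COUNT(*) FROM fact").fetchone()[0]
print(remaining)  # 40
```

Committing after each batch is what keeps the transaction log small, which is the main point of chunked deletes.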
qid & accept id:
(26063286, 26064572)
query:
Matching First and Last Name on two different tables
soup:
Provided you use a 3rd Table to hold you Long/Short Names as so.
\nCREATE TABLE TableNames\n ([Id] int, [OfficialName] varchar(7), [Alias] varchar(7))\n;\n\nINSERT INTO TableNames\n ([Id], [OfficialName], [Alias])\nVALUES\n (1, 'Andrew', 'Andy'),\n (2, 'Andrew', 'Andrew'),\n (3, 'William', 'Bill'),\n (4, 'William', 'William'),\n (5, 'David', 'Dave'),\n (6, 'David', 'David')\n
\nThe following query should give you what you are looking for.
\nSELECT *\nFROM (\n SELECT TableA.Id AS T1_Id\n ,CompanyId AS T1_CompanyId\n ,FirstName AS T1_FirstName\n ,LastName AS T1_LastName\n ,TableNames.OfficialName AS OfficialName\n FROM tableA\n INNER JOIN tableNames ON TableA.FirstName = TableNames.Alias\n ) T1\n ,(\n SELECT tableB.Id AS T2_Id\n ,CompanyId AS T2_CompanyId\n ,FirstName AS T2_FirstName\n ,LastName AS T2_LastName\n ,TableNames.OfficialName AS OfficialName\n FROM tableB\n INNER JOIN tableNames ON TableB.FirstName = TableNames.Alias\n ) T2\nWHERE T1.T1_CompanyId = T2.T2_CompanyId\n AND T1.OfficialName = T2.OfficialName\n AND T1.T1_LastName = T2.T2_LastName\n
\nI set up my solution sqlfiddle at http://sqlfiddle.com/#!3/64514/2
\nI hope this helps.
\n
soup wrap:
Provided you use a 3rd table to hold your long/short names, like so:
CREATE TABLE TableNames
([Id] int, [OfficialName] varchar(7), [Alias] varchar(7))
;
INSERT INTO TableNames
([Id], [OfficialName], [Alias])
VALUES
(1, 'Andrew', 'Andy'),
(2, 'Andrew', 'Andrew'),
(3, 'William', 'Bill'),
(4, 'William', 'William'),
(5, 'David', 'Dave'),
(6, 'David', 'David')
The following query should give you what you are looking for.
SELECT *
FROM (
SELECT TableA.Id AS T1_Id
,CompanyId AS T1_CompanyId
,FirstName AS T1_FirstName
,LastName AS T1_LastName
,TableNames.OfficialName AS OfficialName
FROM tableA
INNER JOIN tableNames ON TableA.FirstName = TableNames.Alias
) T1
,(
SELECT tableB.Id AS T2_Id
,CompanyId AS T2_CompanyId
,FirstName AS T2_FirstName
,LastName AS T2_LastName
,TableNames.OfficialName AS OfficialName
FROM tableB
INNER JOIN tableNames ON TableB.FirstName = TableNames.Alias
) T2
WHERE T1.T1_CompanyId = T2.T2_CompanyId
AND T1.OfficialName = T2.OfficialName
AND T1.T1_LastName = T2.T2_LastName
I set up my solution as a SQL Fiddle at http://sqlfiddle.com/#!3/64514/2
I hope this helps.
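The core trick is that both sides normalise FirstName to its OfficialName before matching. A compact sqlite3 sketch of the same idea (sample rows invented, modern JOIN syntax instead of the comma-style join above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TableNames (OfficialName TEXT, Alias TEXT);
INSERT INTO TableNames VALUES ('Andrew','Andy'), ('Andrew','Andrew'),
                              ('William','Bill'), ('William','William');
CREATE TABLE tableA (CompanyId INTEGER, FirstName TEXT, LastName TEXT);
CREATE TABLE tableB (CompanyId INTEGER, FirstName TEXT, LastName TEXT);
INSERT INTO tableA VALUES (1, 'Andy', 'Smith');
INSERT INTO tableB VALUES (1, 'Andrew', 'Smith');
""")

# Map each side's FirstName to the official name, then match on
# company + official name + last name.
rows = conn.execute("""
SELECT a.FirstName, b.FirstName, n1.OfficialName
FROM tableA a
JOIN TableNames n1 ON a.FirstName = n1.Alias
JOIN tableB b      ON b.CompanyId = a.CompanyId AND b.LastName = a.LastName
JOIN TableNames n2 ON b.FirstName = n2.Alias AND n2.OfficialName = n1.OfficialName
""").fetchall()
print(rows)  # [('Andy', 'Andrew', 'Andrew')]
```

Note the name table lists each official name as an alias of itself, so exact matches still join through.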
qid & accept id:
(26063793, 26065069)
query:
Finding duplicate records from table and deleting all but one with latest date
soup:
Start with a SELECT query which identifies the rows you want deleted.
\nSELECT y.CreatedBy, y.FileId, y.FileName, y.CreationDate\nFROM YourTable AS y\nWHERE\n y.CreationDate < \n DMax(\n "CreationDate",\n "YourTable",\n "FileName='" & y.FileName & "'"\n );\n
\nAfter you verify that query identifies the correct rows, convert it to a DELETE query.
\nDELETE\nFROM YourTable AS y\nWHERE\n y.CreationDate < \n DMax(\n "CreationDate",\n "YourTable",\n "FileName='" & y.FileName & "'"\n );\n
\n
soup wrap:
Start with a SELECT query which identifies the rows you want deleted.
SELECT y.CreatedBy, y.FileId, y.FileName, y.CreationDate
FROM YourTable AS y
WHERE
y.CreationDate <
DMax(
"CreationDate",
"YourTable",
"FileName='" & y.FileName & "'"
);
After you verify that query identifies the correct rows, convert it to a DELETE query.
DELETE
FROM YourTable AS y
WHERE
y.CreationDate <
DMax(
"CreationDate",
"YourTable",
"FileName='" & y.FileName & "'"
);
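DMax is Access-specific, but the "delete everything older than the latest row per FileName" idea works anywhere with a correlated subquery. A sqlite3 sketch with invented sample rows (ISO date strings compare correctly as text):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (FileName TEXT, CreationDate TEXT)")
conn.executemany("INSERT INTO files VALUES (?, ?)", [
    ("a.txt", "2024-01-01"), ("a.txt", "2024-03-01"),
    ("b.txt", "2024-02-01"),
])

# Keep only the newest row per FileName, deleting every older duplicate
conn.execute("""
DELETE FROM files
WHERE CreationDate < (SELECT MAX(f2.CreationDate)
                      FROM files f2
                      WHERE f2.FileName = files.FileName)
""")
rows = conn.execute("SELECT * FROM files ORDER BY FileName").fetchall()
print(rows)  # [('a.txt', '2024-03-01'), ('b.txt', '2024-02-01')]
```

As the answer suggests, run the equivalent SELECT first to verify which rows would go before converting it to a DELETE.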
qid & accept id:
(26088814, 26088920)
query:
How to select every Monday date and every Friday date in the year
soup:
Here's one way (you might need to check which day of the week is setup to be the first, here I have Sunday as the first day of the week)
\nYou can use a table with many rows (more than 365) to CROSS JOIN to in order to get a run of dates (a tally table).
\nMy sys columns has over 800 rows in, you could use any other table or even CROSS JOIN a table onto itself to multiply up the number of rows
\nHere I used the row_number function to get a running count of rows and incremented the date by 1 day for each row:
\nselect \ndateadd(d, row_number() over (order by name), cast('31 Dec 2013' as datetime)) as dt \nfrom sys.columns a\n
\nWith the result set of dates now, it's trivial to check the day of week using datepart()
\nSELECT\n dt, \n datename(dw, dt) \nFROM \n (\n select \n dateadd(d, row_number() over (order by name), cast('31 Dec 2013' as datetime)) as dt \n from \n sys.columns a\n ) as dates \nWHERE \n(datepart(dw, dates.dt) = 2 OR datepart(dw, dates.dt) = 6)\nAND dt >= '01 Jan 2014' AND dt < '01 Jan 2015'\n
\nEdit:
\nHere's an example SqlFiddle
\nhttp://sqlfiddle.com/#!6/d41d8/21757
\nEdit 2:
\nIf you want them on the same row, days of the week at least are constant, you know Friday is always 4 days after Monday so do the same but only look for Mondays, then just add 4 days to the Monday...
\nSELECT\n dt as MonDate, \n datename(dw, dt) as MonDateName,\n dateadd(d, 4, dt) as FriDate,\n datename(dw, dateadd(d, 4, dt)) as FriDateName\nFROM \n (\n select \n dateadd(d, row_number() over (order by name), cast('31 Dec 2013' as datetime)) as dt \n from \n sys.columns a\n ) as dates \nWHERE \ndatepart(dw, dates.dt) = 2\nAND dt >= '01 Jan 2014' AND dt < '01 Jan 2015'\nAND dt >= '01 Jan 2014' AND dt < '01 Jan 2015'\n
\nExample SqlFiddle for this:
\nhttp://sqlfiddle.com/#!6/d41d8/21764
\n(note that only a few rows come back because sys.columns is quite small on the SqlFiddle server, try another system table if this is a problem)
\n
soup wrap:
Here's one way (you might need to check which day of the week is set up to be the first; here I have Sunday as the first day of the week).
You can use a table with many rows (more than 365) to CROSS JOIN to in order to get a run of dates (a tally table).
My sys.columns has over 800 rows in it; you could use any other table, or even CROSS JOIN a table onto itself, to multiply up the number of rows.
Here I used the row_number function to get a running count of rows and incremented the date by 1 day for each row:
select
dateadd(d, row_number() over (order by name), cast('31 Dec 2013' as datetime)) as dt
from sys.columns a
With the result set of dates now, it's trivial to check the day of week using datepart()
SELECT
dt,
datename(dw, dt)
FROM
(
select
dateadd(d, row_number() over (order by name), cast('31 Dec 2013' as datetime)) as dt
from
sys.columns a
) as dates
WHERE
(datepart(dw, dates.dt) = 2 OR datepart(dw, dates.dt) = 6)
AND dt >= '01 Jan 2014' AND dt < '01 Jan 2015'
Edit:
Here's an example SqlFiddle
http://sqlfiddle.com/#!6/d41d8/21757
Edit 2:
If you want them on the same row: day-of-week offsets are constant, and Friday is always 4 days after Monday, so do the same but only look for Mondays, then just add 4 days to each Monday...
SELECT
dt as MonDate,
datename(dw, dt) as MonDateName,
dateadd(d, 4, dt) as FriDate,
datename(dw, dateadd(d, 4, dt)) as FriDateName
FROM
(
select
dateadd(d, row_number() over (order by name), cast('31 Dec 2013' as datetime)) as dt
from
sys.columns a
) as dates
WHERE
datepart(dw, dates.dt) = 2
AND dt >= '01 Jan 2014' AND dt < '01 Jan 2015'
Example SqlFiddle for this:
http://sqlfiddle.com/#!6/d41d8/21764
(note that only a few rows come back because sys.columns is quite small on the SqlFiddle server, try another system table if this is a problem)
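As a cross-check of the tally-table approach, the same "every Monday plus its Friday" enumeration can be done directly in Python, which avoids depending on the size of a system table:

```python
from datetime import date, timedelta

year = 2014
d = date(year, 1, 1)
# Advance to the first Monday of the year (weekday() == 0 is Monday)
d += timedelta(days=(7 - d.weekday()) % 7)

pairs = []
while d.year == year:
    pairs.append((d, d + timedelta(days=4)))  # each Monday and its Friday
    d += timedelta(days=7)

print(len(pairs), pairs[0])
```

As with the SQL version, a late-December Monday's Friday can spill into the next year; filter the Friday too if that matters.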
qid & accept id:
(26102456, 26102572)
query:
I need a check constraint on two columns, at least one must be not null
soup:
This can be done with a check constraint that verifies null value and matches the result with or
\ncreate table #t (i int\n , j int\n , constraint chk_null check (i is not null or j is not null))\n
\nThe following are the test cases
\ninsert into #t values (null, null) --> error\ninsert into #t values (1, null) --> ok\ninsert into #t values (null, 1) --> ok\ninsert into #t values (1, 1) --> ok\n
\n
soup wrap:
This can be done with a check constraint that tests each column for null and combines the results with OR:
create table #t (i int
, j int
, constraint chk_null check (i is not null or j is not null))
The following are the test cases
insert into #t values (null, null) --> error
insert into #t values (1, null) --> ok
insert into #t values (null, 1) --> ok
insert into #t values (1, 1) --> ok
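The same constraint works verbatim in other engines; here is a runnable sqlite3 version of the four test cases, confirming that only the double-NULL insert is rejected:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE t (i INTEGER, j INTEGER,
    CONSTRAINT chk_null CHECK (i IS NOT NULL OR j IS NOT NULL))""")

# The three valid combinations all insert fine
for i, j in [(1, None), (None, 1), (1, 1)]:
    conn.execute("INSERT INTO t VALUES (?, ?)", (i, j))

# Both columns NULL violates the constraint
rejected = False
try:
    conn.execute("INSERT INTO t VALUES (NULL, NULL)")
except sqlite3.IntegrityError:
    rejected = True

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(rejected, count)  # True 3
```

Note the check uses IS NOT NULL, which always yields true or false, so the constraint never evaluates to the unknown (NULL) result that would let a row slip through.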
qid & accept id:
(26162762, 26163259)
query:
sql select query from a single table, results separated by intervals
soup:
Given a tableyour_table with columnsts timestamp/datetime, val int one option if you want to group by minute would be to deduct the seconds part of the date and group by that.
\nThe same concept should be possible to use for other intervals.
\nUsing MS SQL it would be:
\nselect \n dateadd(second, -DATEPART(second,ts),ts) as ts, \n SUM(val) as v_sum \nfrom your_table\ngroup by dateadd(second, -DATEPART(second,ts),ts)\n
\nI think the Postgresql could be this:
\nSELECT \n date_trunc('minute', ts),\n sum(val) v_sum \nFROM\n your_table\nGROUP BY date_trunc('minute', ts)\nORDER BY 1\n
\nI tried the MSSQL version and got the desired result, but as SQL Fiddle is down at the moment I couldn't try the PG version. and also the PG version, which seems to work.
\n
soup wrap:
Given a table your_table with columns ts (timestamp/datetime) and val (int), one option if you want to group by minute would be to deduct the seconds part of the date and group by that.
The same concept should be possible to use for other intervals.
Using MS SQL it would be:
select
dateadd(second, -DATEPART(second,ts),ts) as ts,
SUM(val) as v_sum
from your_table
group by dateadd(second, -DATEPART(second,ts),ts)
I think the Postgresql could be this:
SELECT
date_trunc('minute', ts),
sum(val) v_sum
FROM
your_table
GROUP BY date_trunc('minute', ts)
ORDER BY 1
I tried the MSSQL version and got the desired result, and also the PG version, which seems to work.
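The same truncate-then-group idea in sqlite3, where strftime plays the role of date_trunc('minute', ...) by simply dropping the seconds (sample data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (ts TEXT, val INTEGER)")
conn.executemany("INSERT INTO samples VALUES (?, ?)", [
    ("2024-05-01 10:00:05", 1),
    ("2024-05-01 10:00:42", 2),
    ("2024-05-01 10:01:07", 5),
])

# strftime drops the seconds, so every row in the same minute groups together
rows = conn.execute("""
SELECT strftime('%Y-%m-%d %H:%M', ts) AS minute, SUM(val) AS v_sum
FROM samples GROUP BY minute ORDER BY minute
""").fetchall()
print(rows)  # [('2024-05-01 10:00', 3), ('2024-05-01 10:01', 5)]
```

Other intervals follow the same pattern: truncate the timestamp to the interval boundary, then group on the truncated value.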
qid & accept id:
(26167223, 26167258)
query:
How can I count a column with values
soup:
select columnname, count(*)\nfrom YourTable\ngroup by columnName\n
\nor
\nselect \nsum(case when columnname='present' then =1 end) 'present',\nsum(case when columnname='absent' then =1 end) 'absent',\nsum(case when columnname='leave' then =1 end) 'leave'\nfrom myTable\n
\n
soup wrap:
select columnname, count(*)
from YourTable
group by columnName
or
select
sum(case when columnname='present' then 1 end) 'present',
sum(case when columnname='absent' then 1 end) 'absent',
sum(case when columnname='leave' then 1 end) 'leave'
from myTable
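The SUM(CASE ...) trick pivots one status column into per-value counts on a single row. A runnable sqlite3 version (attendance table and values invented; `on_leave` used as the alias to keep identifiers plain):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE attendance (status TEXT)")
conn.executemany("INSERT INTO attendance VALUES (?)",
                 [("present",), ("present",), ("absent",), ("leave",)])

# Each CASE yields 1 for a matching row and 0 otherwise, so SUM counts matches
row = conn.execute("""
SELECT
  SUM(CASE WHEN status = 'present' THEN 1 ELSE 0 END) AS present,
  SUM(CASE WHEN status = 'absent'  THEN 1 ELSE 0 END) AS absent,
  SUM(CASE WHEN status = 'leave'   THEN 1 ELSE 0 END) AS on_leave
FROM attendance
""").fetchone()
print(row)  # (2, 1, 1)
```

The GROUP BY variant from the answer gives the same counts as one row per status instead of one column per status.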
qid & accept id:
(26208027, 26208078)
query:
Select from two tables without using an OR
soup:
If the requirement is to not use an OR, you could use UNION instead. Since you filter the department on its number, not on its name, you do not need the second table at all:
\nSELECT name FROM employee WHERE salary > 20000\n UNION\nSELECT name FROM employee WHERE dNumber = 1\n
\nIf you wanted to filter the department by name, a join or a subquery would be required:
\nSELECT name FROM employee WHERE salary > 20000\n UNION\nSELECT name FROM employee e\nJOIN department d ON e.dNumber=d.departmentNumber\nWHERE departmentName = 'math'\n
\n
soup wrap:
If the requirement is to not use an OR, you could use UNION instead. Since you filter the department on its number, not on its name, you do not need the second table at all:
SELECT name FROM employee WHERE salary > 20000
UNION
SELECT name FROM employee WHERE dNumber = 1
If you wanted to filter the department by name, a join or a subquery would be required:
SELECT name FROM employee WHERE salary > 20000
UNION
SELECT name FROM employee e
JOIN department d ON e.dNumber=d.departmentNumber
WHERE departmentName = 'math'
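One subtlety worth seeing in action: plain UNION (not UNION ALL) also removes duplicates, so an employee who satisfies both conditions appears once, exactly as with OR. A sqlite3 sketch with invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (name TEXT, salary INTEGER, dNumber INTEGER)")
conn.executemany("INSERT INTO employee VALUES (?, ?, ?)", [
    ("Ann", 25000, 2), ("Bob", 15000, 1), ("Cid", 30000, 1),
])

# Cid matches both branches but UNION de-duplicates, mirroring OR semantics
rows = conn.execute("""
SELECT name FROM employee WHERE salary > 20000
UNION
SELECT name FROM employee WHERE dNumber = 1
ORDER BY name
""").fetchall()
print(rows)  # [('Ann',), ('Bob',), ('Cid',)]
```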
qid & accept id:
(26227103, 26228641)
query:
SQL query inner join tables, print to HTML